00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2032 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3297 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.120 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.642 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.653 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.665 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:05.665 > git config core.sparsecheckout # timeout=10 00:00:05.676 > git read-tree -mu HEAD # timeout=10 00:00:05.692 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:05.711 Commit message: "packer: Add bios builder" 00:00:05.711 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:05.821 [Pipeline] Start of Pipeline 00:00:05.833 [Pipeline] library 00:00:05.834 Loading library shm_lib@master 00:00:05.834 Library shm_lib@master is cached. Copying from home. 00:00:05.847 [Pipeline] node 00:00:05.858 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:05.860 [Pipeline] { 00:00:05.871 [Pipeline] catchError 00:00:05.872 [Pipeline] { 00:00:05.886 [Pipeline] wrap 00:00:05.896 [Pipeline] { 00:00:05.903 [Pipeline] stage 00:00:05.904 [Pipeline] { (Prologue) 00:00:06.077 [Pipeline] sh 00:00:06.357 + logger -p user.info -t JENKINS-CI 00:00:06.372 [Pipeline] echo 00:00:06.373 Node: WFP19 00:00:06.383 [Pipeline] sh 00:00:06.676 [Pipeline] setCustomBuildProperty 00:00:06.687 [Pipeline] echo 00:00:06.688 Cleanup processes 00:00:06.691 [Pipeline] sh 00:00:06.968 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:06.968 3118191 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:06.980 [Pipeline] sh 00:00:07.259 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:07.259 ++ grep -v 'sudo pgrep' 00:00:07.259 ++ awk '{print $1}' 00:00:07.259 + sudo kill -9 00:00:07.259 + true 00:00:07.270 [Pipeline] cleanWs 00:00:07.277 [WS-CLEANUP] Deleting project workspace... 00:00:07.277 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.283 [WS-CLEANUP] done 00:00:07.286 [Pipeline] setCustomBuildProperty 00:00:07.298 [Pipeline] sh 00:00:07.575 + sudo git config --global --replace-all safe.directory '*' 00:00:07.642 [Pipeline] httpRequest 00:00:07.663 [Pipeline] echo 00:00:07.665 Sorcerer 10.211.164.101 is alive 00:00:07.672 [Pipeline] httpRequest 00:00:07.676 HttpMethod: GET 00:00:07.677 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.677 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.692 Response Code: HTTP/1.1 200 OK 00:00:07.692 Success: Status code 200 is in the accepted range: 200,404 00:00:07.693 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:14.687 [Pipeline] sh 00:00:14.969 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:14.986 [Pipeline] httpRequest 00:00:15.019 [Pipeline] echo 00:00:15.020 Sorcerer 10.211.164.101 is alive 00:00:15.029 [Pipeline] httpRequest 00:00:15.033 HttpMethod: GET 00:00:15.034 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:15.034 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:15.035 Response Code: HTTP/1.1 200 OK 00:00:15.035 Success: Status code 200 is in the accepted range: 200,404 00:00:15.036 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:34.447 [Pipeline] sh 00:00:34.731 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:38.036 [Pipeline] sh 00:00:38.323 + git -C spdk log --oneline -n5 00:00:38.323 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:38.323 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:38.323 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:38.323 d005e023b raid: fix empty slot not updated in sb after resize 00:00:38.323 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:38.342 [Pipeline] withCredentials 00:00:38.353 > git --version # timeout=10 00:00:38.367 > git --version # 'git version 2.39.2' 00:00:38.385 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:38.387 [Pipeline] { 00:00:38.398 [Pipeline] retry 00:00:38.401 [Pipeline] { 00:00:38.420 [Pipeline] sh 00:00:38.704 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:38.975 [Pipeline] } 00:00:38.998 [Pipeline] // retry 00:00:39.004 [Pipeline] } 00:00:39.026 [Pipeline] // withCredentials 00:00:39.039 [Pipeline] httpRequest 00:00:39.069 [Pipeline] echo 00:00:39.071 Sorcerer 10.211.164.101 is alive 00:00:39.080 [Pipeline] httpRequest 00:00:39.085 HttpMethod: GET 00:00:39.086 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:39.086 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:39.113 Response Code: HTTP/1.1 200 OK 00:00:39.113 Success: Status code 200 is in the accepted range: 200,404 00:00:39.114 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.656 [Pipeline] sh 00:01:27.933 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:29.846 [Pipeline] sh 00:01:30.126 + git -C dpdk log --oneline -n5 00:01:30.126 caf0f5d395 version: 22.11.4 00:01:30.126 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:30.126 dc9c799c7d vhost: fix missing spinlock unlock 00:01:30.126 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:30.126 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:30.136 [Pipeline] } 00:01:30.155 [Pipeline] // stage 00:01:30.165 [Pipeline] stage 00:01:30.167 [Pipeline] { (Prepare) 00:01:30.189 [Pipeline] writeFile 00:01:30.207 [Pipeline] sh 00:01:30.486 + logger -p user.info -t JENKINS-CI 00:01:30.499 [Pipeline] sh 00:01:30.777 + logger -p user.info -t JENKINS-CI 00:01:30.788 [Pipeline] sh 00:01:31.067 + cat autorun-spdk.conf 00:01:31.067 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.068 SPDK_TEST_BLOCKDEV=1 00:01:31.068 SPDK_TEST_ISAL=1 00:01:31.068 SPDK_TEST_CRYPTO=1 00:01:31.068 SPDK_TEST_REDUCE=1 00:01:31.068 SPDK_TEST_VBDEV_COMPRESS=1 00:01:31.068 SPDK_RUN_UBSAN=1 00:01:31.068 SPDK_TEST_ACCEL=1 00:01:31.068 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.068 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:31.074 RUN_NIGHTLY=1 00:01:31.101 [Pipeline] readFile 00:01:31.127 [Pipeline] withEnv 00:01:31.130 [Pipeline] { 00:01:31.146 [Pipeline] sh 00:01:31.427 + set -ex 00:01:31.427 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:01:31.427 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:01:31.427 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.427 ++ SPDK_TEST_BLOCKDEV=1 00:01:31.427 ++ SPDK_TEST_ISAL=1 00:01:31.427 ++ SPDK_TEST_CRYPTO=1 00:01:31.427 ++ SPDK_TEST_REDUCE=1 00:01:31.427 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:01:31.427 ++ SPDK_RUN_UBSAN=1 00:01:31.427 ++ SPDK_TEST_ACCEL=1 00:01:31.427 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.427 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:31.427 ++ RUN_NIGHTLY=1 
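# A minimal sketch (not part of the captured run) of how a runner script can consume
# autorun-spdk.conf as seen above: the file is plain shell KEY=value lines, so it is
# simply sourced and its flags tested. Paths match this workspace; the echo is
# illustrative only.
CONF=/var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
if [[ -f "$CONF" ]]; then
    source "$CONF"
fi
if [[ "${SPDK_TEST_CRYPTO:-0}" -eq 1 ]]; then
    echo "crypto/compress test suites requested"
fi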
00:01:31.427 + case $SPDK_TEST_NVMF_NICS in 00:01:31.427 + DRIVERS= 00:01:31.427 + [[ -n '' ]] 00:01:31.427 + exit 0 00:01:31.437 [Pipeline] } 00:01:31.457 [Pipeline] // withEnv 00:01:31.463 [Pipeline] } 00:01:31.482 [Pipeline] // stage 00:01:31.492 [Pipeline] catchError 00:01:31.493 [Pipeline] { 00:01:31.510 [Pipeline] timeout 00:01:31.510 Timeout set to expire in 1 hr 0 min 00:01:31.511 [Pipeline] { 00:01:31.524 [Pipeline] stage 00:01:31.526 [Pipeline] { (Tests) 00:01:31.541 [Pipeline] sh 00:01:31.820 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest 00:01:31.821 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest 00:01:31.821 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest 00:01:31.821 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]] 00:01:31.821 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:31.821 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output 00:01:31.821 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]] 00:01:31.821 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:01:31.821 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output 00:01:31.821 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:01:31.821 + [[ crypto-phy-autotest == pkgdep-* ]] 00:01:31.821 + cd /var/jenkins/workspace/crypto-phy-autotest 00:01:31.821 + source /etc/os-release 00:01:31.821 ++ NAME='Fedora Linux' 00:01:31.821 ++ VERSION='38 (Cloud Edition)' 00:01:31.821 ++ ID=fedora 00:01:31.821 ++ VERSION_ID=38 00:01:31.821 ++ VERSION_CODENAME= 00:01:31.821 ++ PLATFORM_ID=platform:f38 00:01:31.821 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:31.821 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.821 ++ LOGO=fedora-logo-icon 00:01:31.821 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:31.821 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.821 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:31.821 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.821 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.821 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.821 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:31.821 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.821 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:31.821 ++ SUPPORT_END=2024-05-14 00:01:31.821 ++ VARIANT='Cloud Edition' 00:01:31.821 ++ VARIANT_ID=cloud 00:01:31.821 + uname -a 00:01:31.821 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:31.821 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:01:36.007 Hugepages 00:01:36.007 node hugesize free / total 00:01:36.007 node0 1048576kB 0 / 0 00:01:36.007 node0 2048kB 0 / 0 00:01:36.007 node1 1048576kB 0 / 0 00:01:36.007 node1 2048kB 0 / 0 00:01:36.007 00:01:36.007 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:36.007 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:36.007 
I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:36.007 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:36.007 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:36.007 + rm -f /tmp/spdk-ld-path 00:01:36.007 + source autorun-spdk.conf 00:01:36.007 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.007 ++ SPDK_TEST_BLOCKDEV=1 00:01:36.007 ++ SPDK_TEST_ISAL=1 00:01:36.007 ++ SPDK_TEST_CRYPTO=1 00:01:36.007 ++ SPDK_TEST_REDUCE=1 00:01:36.007 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:01:36.007 ++ SPDK_RUN_UBSAN=1 00:01:36.007 ++ SPDK_TEST_ACCEL=1 00:01:36.007 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:36.007 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:36.007 ++ RUN_NIGHTLY=1 00:01:36.007 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:36.007 + [[ -n '' ]] 00:01:36.007 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:36.007 + for M in /var/spdk/build-*-manifest.txt 00:01:36.007 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:36.007 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:01:36.007 + for M in /var/spdk/build-*-manifest.txt 00:01:36.007 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:36.007 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:01:36.007 ++ uname 00:01:36.007 + [[ Linux == \L\i\n\u\x ]] 00:01:36.008 + sudo dmesg -T 00:01:36.008 + sudo dmesg --clear 00:01:36.008 + dmesg_pid=3119300 00:01:36.008 + [[ Fedora Linux == FreeBSD ]] 00:01:36.008 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:36.008 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:36.008 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:36.008 + [[ -x /usr/src/fio-static/fio ]] 00:01:36.008 + export FIO_BIN=/usr/src/fio-static/fio 00:01:36.008 + FIO_BIN=/usr/src/fio-static/fio 00:01:36.008 + sudo dmesg -Tw 00:01:36.008 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:36.008 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:36.008 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:36.008 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:36.008 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:36.008 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:36.008 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:36.008 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:36.008 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:01:36.008 Test configuration: 00:01:36.008 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.008 SPDK_TEST_BLOCKDEV=1 00:01:36.008 SPDK_TEST_ISAL=1 00:01:36.008 SPDK_TEST_CRYPTO=1 00:01:36.008 SPDK_TEST_REDUCE=1 00:01:36.008 SPDK_TEST_VBDEV_COMPRESS=1 00:01:36.008 SPDK_RUN_UBSAN=1 00:01:36.008 SPDK_TEST_ACCEL=1 00:01:36.008 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:36.008 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:36.008 RUN_NIGHTLY=1 10:11:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:01:36.008 10:11:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:36.008 10:11:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:36.008 10:11:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:36.008 10:11:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.008 10:11:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.008 10:11:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.008 10:11:48 -- paths/export.sh@5 -- $ export PATH 00:01:36.008 10:11:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.008 10:11:48 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:01:36.008 10:11:48 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:36.008 10:11:48 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721981508.XXXXXX 00:01:36.008 10:11:48 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721981508.TrC42a 00:01:36.008 10:11:48 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:36.008 10:11:48 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:01:36.008 10:11:48 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:36.008 10:11:48 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/crypto-phy-autotest/dpdk' 00:01:36.008 10:11:48 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:36.008 10:11:48 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/dpdk --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:36.008 10:11:48 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:36.008 10:11:48 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:36.008 10:11:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.008 10:11:48 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build' 00:01:36.008 10:11:48 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:36.008 10:11:48 -- pm/common@17 -- $ local monitor 00:01:36.008 10:11:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.008 10:11:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.008 10:11:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.008 10:11:48 -- pm/common@21 -- $ date +%s 00:01:36.008 10:11:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.008 10:11:48 -- pm/common@21 -- $ date +%s 00:01:36.008 10:11:48 -- pm/common@25 -- $ sleep 1 00:01:36.008 10:11:48 -- pm/common@21 -- $ date +%s 00:01:36.008 10:11:48 -- pm/common@21 -- $ date +%s 00:01:36.008 10:11:48 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721981508 00:01:36.008 10:11:48 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721981508 00:01:36.008 10:11:48 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721981508 00:01:36.008 10:11:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721981508 00:01:36.008 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721981508_collect-vmstat.pm.log 00:01:36.008 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721981508_collect-cpu-load.pm.log 00:01:36.008 Redirecting to 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721981508_collect-cpu-temp.pm.log 00:01:36.008 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721981508_collect-bmc-pm.bmc.pm.log 00:01:36.942 10:11:49 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:36.942 10:11:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:36.942 10:11:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:36.942 10:11:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:36.942 10:11:49 -- spdk/autobuild.sh@16 -- $ date -u 00:01:36.942 Fri Jul 26 08:11:49 AM UTC 2024 00:01:36.942 10:11:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:36.942 v24.09-pre-321-g704257090 00:01:36.942 10:11:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:36.942 10:11:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:36.942 10:11:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:36.942 10:11:49 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:36.942 10:11:49 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:36.942 10:11:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.201 ************************************ 00:01:37.201 START TEST ubsan 00:01:37.201 ************************************ 00:01:37.201 10:11:49 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:37.201 using ubsan 00:01:37.201 00:01:37.201 real 0m0.001s 00:01:37.201 user 0m0.000s 00:01:37.201 sys 0m0.000s 00:01:37.201 10:11:49 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:37.201 10:11:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:37.201 ************************************ 00:01:37.201 END TEST ubsan 00:01:37.201 ************************************ 00:01:37.201 10:11:49 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:37.201 10:11:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:37.201 10:11:49 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:37.201 10:11:49 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:37.201 10:11:49 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:37.201 10:11:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.201 ************************************ 00:01:37.201 START TEST build_native_dpdk 00:01:37.201 ************************************ 00:01:37.201 10:11:49 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:37.201 10:11:49 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/crypto-phy-autotest/dpdk 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/dpdk ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/crypto-phy-autotest/dpdk log --oneline -n 5 00:01:37.201 caf0f5d395 version: 22.11.4 00:01:37.201 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:37.201 dc9c799c7d vhost: fix missing spinlock unlock 00:01:37.201 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:37.201 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 1 -eq 1 ]] 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@104 -- $ intel_ipsec_mb_ver=v0.54 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@105 -- $ intel_ipsec_mb_drv=crypto/aesni_mb 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@106 -- $ intel_ipsec_lib= 00:01:37.201 10:11:49 build_native_dpdk -- common/autobuild_common.sh@107 -- $ ge 22.11.4 21.11.0 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '>=' 21.11.0 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@333 -- $ 
IFS=.-: 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=>=' 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:37.202 10:11:49 build_native_dpdk -- scripts/common.sh@364 -- $ return 0 00:01:37.202 10:11:49 build_native_dpdk -- common/autobuild_common.sh@112 -- $ intel_ipsec_mb_ver=v1.0 00:01:37.202 10:11:49 build_native_dpdk -- common/autobuild_common.sh@113 -- $ intel_ipsec_mb_drv=crypto/ipsec_mb 00:01:37.202 10:11:49 build_native_dpdk -- common/autobuild_common.sh@114 -- $ intel_ipsec_lib=lib 00:01:37.202 10:11:49 build_native_dpdk -- common/autobuild_common.sh@116 -- $ git clone --branch v1.0 --depth 1 https://github.com/intel/intel-ipsec-mb.git /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb 00:01:37.202 Cloning into '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb'... 00:01:38.576 Note: switching to 'a1a289dabb23be78d6531de481ba6a417c67b0a5'. 00:01:38.576 00:01:38.576 You are in 'detached HEAD' state. You can look around, make experimental 00:01:38.576 changes and commit them, and you can discard any commits you make in this 00:01:38.576 state without impacting any branches by switching back to a branch. 00:01:38.576 00:01:38.576 If you want to create a new branch to retain commits you create, you may 00:01:38.576 do so (now or later) by using -c with the switch command. 
Example: 00:01:38.576 00:01:38.576 git switch -c 00:01:38.576 00:01:38.576 Or undo this operation with: 00:01:38.576 00:01:38.576 git switch - 00:01:38.576 00:01:38.576 Turn off this advice by setting config variable advice.detachedHead to false 00:01:38.576 00:01:38.576 10:11:51 build_native_dpdk -- common/autobuild_common.sh@117 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb 00:01:38.576 10:11:51 build_native_dpdk -- common/autobuild_common.sh@118 -- $ make -j112 all SHARED=y EXTRA_CFLAGS=-fPIC 00:01:38.576 make -C lib 00:01:38.576 make[1]: Entering directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:01:39.140 mkdir obj 00:01:39.399 nasm -MD obj/aes_keyexp_128.d -MT obj/aes_keyexp_128.o -o obj/aes_keyexp_128.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_128.asm 00:01:39.399 nasm -MD obj/aes_keyexp_192.d -MT obj/aes_keyexp_192.o -o obj/aes_keyexp_192.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_192.asm 00:01:39.399 nasm -MD obj/aes_keyexp_256.d -MT obj/aes_keyexp_256.o -o obj/aes_keyexp_256.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_keyexp_256.asm 00:01:39.399 nasm -MD obj/aes_cmac_subkey_gen.d -MT obj/aes_cmac_subkey_gen.o -o obj/aes_cmac_subkey_gen.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes_cmac_subkey_gen.asm 00:01:39.399 nasm -MD obj/save_xmms.d -MT obj/save_xmms.o -o obj/save_xmms.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/save_xmms.asm 00:01:39.399 nasm -MD obj/clear_regs_mem_fns.d -MT obj/clear_regs_mem_fns.o -o obj/clear_regs_mem_fns.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/clear_regs_mem_fns.asm 00:01:39.399 nasm -MD obj/const.d -MT obj/const.o -o obj/const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/const.asm 00:01:39.399 nasm -MD obj/aes128_ecbenc_x3.d -MT obj/aes128_ecbenc_x3.o -o obj/aes128_ecbenc_x3.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/aes128_ecbenc_x3.asm 00:01:39.399 nasm -MD obj/zuc_common.d -MT obj/zuc_common.o -o obj/zuc_common.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/zuc_common.asm 00:01:39.399 nasm -MD obj/wireless_common.d -MT obj/wireless_common.o -o obj/wireless_common.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/wireless_common.asm 00:01:39.399 nasm -MD obj/constant_lookup.d -MT obj/constant_lookup.o -o obj/constant_lookup.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/constant_lookup.asm 00:01:39.399 nasm -MD obj/crc32_refl_const.d -MT obj/crc32_refl_const.o -o obj/crc32_refl_const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/crc32_refl_const.asm 00:01:39.399 nasm -MD obj/crc32_const.d -MT obj/crc32_const.o -o obj/crc32_const.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/crc32_const.asm 00:01:39.399 nasm -MD obj/poly1305.d -MT obj/poly1305.o -o obj/poly1305.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ 
-DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP x86_64/poly1305.asm 00:01:39.399 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/chacha20_poly1305.c -o obj/chacha20_poly1305.o 00:01:39.399 ld -r -z ibt -z shstk -o obj/save_xmms.o.tmp obj/save_xmms.o 00:01:39.399 nasm -MD obj/aes128_cbc_dec_by4_sse_no_aesni.d -MT obj/aes128_cbc_dec_by4_sse_no_aesni.o -o obj/aes128_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbc_dec_by4_sse_no_aesni.asm 00:01:39.399 ld -r -z ibt -z shstk -o obj/const.o.tmp obj/const.o 00:01:39.399 ld -r -z ibt -z shstk -o obj/clear_regs_mem_fns.o.tmp obj/clear_regs_mem_fns.o 00:01:39.399 nasm -MD obj/aes192_cbc_dec_by4_sse_no_aesni.d -MT obj/aes192_cbc_dec_by4_sse_no_aesni.o -o obj/aes192_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes192_cbc_dec_by4_sse_no_aesni.asm 00:01:39.399 ld -r -z ibt -z shstk -o obj/wireless_common.o.tmp obj/wireless_common.o 00:01:39.399 ld -r -z ibt -z shstk -o obj/crc32_refl_const.o.tmp obj/crc32_refl_const.o 00:01:39.399 mv obj/save_xmms.o.tmp obj/save_xmms.o 00:01:39.399 nasm -MD obj/aes256_cbc_dec_by4_sse_no_aesni.d -MT obj/aes256_cbc_dec_by4_sse_no_aesni.o -o obj/aes256_cbc_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cbc_dec_by4_sse_no_aesni.asm 00:01:39.399 nasm -MD obj/aes_cbc_enc_128_x4_no_aesni.d -MT obj/aes_cbc_enc_128_x4_no_aesni.o -o obj/aes_cbc_enc_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_128_x4_no_aesni.asm 00:01:39.400 mv obj/const.o.tmp obj/const.o 00:01:39.400 ld -r -z ibt -z shstk -o obj/crc32_const.o.tmp obj/crc32_const.o 00:01:39.400 mv obj/clear_regs_mem_fns.o.tmp obj/clear_regs_mem_fns.o 00:01:39.400 mv obj/crc32_refl_const.o.tmp obj/crc32_refl_const.o 00:01:39.400 mv obj/wireless_common.o.tmp obj/wireless_common.o 00:01:39.400 nasm -MD obj/aes_cbc_enc_192_x4_no_aesni.d -MT obj/aes_cbc_enc_192_x4_no_aesni.o -o obj/aes_cbc_enc_192_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_192_x4_no_aesni.asm 00:01:39.400 nasm -MD obj/aes_cbc_enc_256_x4_no_aesni.d -MT obj/aes_cbc_enc_256_x4_no_aesni.o -o obj/aes_cbc_enc_256_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbc_enc_256_x4_no_aesni.asm 00:01:39.400 mv obj/crc32_const.o.tmp obj/crc32_const.o 00:01:39.400 nasm -MD obj/aes128_cntr_by8_sse_no_aesni.d -MT obj/aes128_cntr_by8_sse_no_aesni.o -o obj/aes128_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cntr_by8_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes192_cntr_by8_sse_no_aesni.d -MT obj/aes192_cntr_by8_sse_no_aesni.o -o obj/aes192_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf 
-DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes192_cntr_by8_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes256_cntr_by8_sse_no_aesni.d -MT obj/aes256_cntr_by8_sse_no_aesni.o -o obj/aes256_cntr_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cntr_by8_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes_ecb_by4_sse_no_aesni.d -MT obj/aes_ecb_by4_sse_no_aesni.o -o obj/aes_ecb_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_ecb_by4_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes128_cntr_ccm_by8_sse_no_aesni.d -MT obj/aes128_cntr_ccm_by8_sse_no_aesni.o -o obj/aes128_cntr_ccm_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cntr_ccm_by8_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes256_cntr_ccm_by8_sse_no_aesni.d -MT obj/aes256_cntr_ccm_by8_sse_no_aesni.o -o obj/aes256_cntr_ccm_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cntr_ccm_by8_sse_no_aesni.asm 00:01:39.400 ld -r -z ibt -z shstk -o obj/constant_lookup.o.tmp obj/constant_lookup.o 00:01:39.400 nasm -MD obj/pon_sse_no_aesni.d -MT obj/pon_sse_no_aesni.o -o obj/pon_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/pon_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/zuc_sse_no_aesni.d -MT obj/zuc_sse_no_aesni.o -o obj/zuc_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/zuc_sse_no_aesni.asm 00:01:39.400 mv obj/constant_lookup.o.tmp obj/constant_lookup.o 00:01:39.400 nasm -MD obj/aes_cfb_sse_no_aesni.d -MT obj/aes_cfb_sse_no_aesni.o -o obj/aes_cfb_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cfb_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes128_cbc_mac_x4_no_aesni.d -MT obj/aes128_cbc_mac_x4_no_aesni.o -o obj/aes128_cbc_mac_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbc_mac_x4_no_aesni.asm 00:01:39.400 nasm -MD obj/aes256_cbc_mac_x4_no_aesni.d -MT obj/aes256_cbc_mac_x4_no_aesni.o -o obj/aes256_cbc_mac_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes256_cbc_mac_x4_no_aesni.asm 00:01:39.400 nasm -MD obj/aes_xcbc_mac_128_x4_no_aesni.d -MT obj/aes_xcbc_mac_128_x4_no_aesni.o -o obj/aes_xcbc_mac_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_xcbc_mac_128_x4_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_flush_sse_no_aesni.o -o obj/mb_mgr_aes_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes_submit_sse_no_aesni.d -MT obj/mb_mgr_aes_submit_sse_no_aesni.o -o obj/mb_mgr_aes_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_submit_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes192_flush_sse_no_aesni.d -MT obj/mb_mgr_aes192_flush_sse_no_aesni.o -o obj/mb_mgr_aes192_flush_sse_no_aesni.o -Werror -felf64 -Xgnu 
-gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes192_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes192_submit_sse_no_aesni.d -MT obj/mb_mgr_aes192_submit_sse_no_aesni.o -o obj/mb_mgr_aes192_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes192_submit_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes256_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes256_submit_sse_no_aesni.d -MT obj/mb_mgr_aes256_submit_sse_no_aesni.o -o obj/mb_mgr_aes256_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_submit_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.asm 00:01:39.400 ld -r -z ibt -z shstk -o obj/poly1305.o.tmp obj/poly1305.o 00:01:39.400 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.asm 00:01:39.400 mv obj/poly1305.o.tmp obj/poly1305.o 00:01:39.400 nasm -MD obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.d -MT obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o -o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_xcbc_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.d -MT obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o -o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes_xcbc_submit_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_zuc_submit_flush_sse_no_aesni.d -MT obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o -o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_zuc_submit_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/ethernet_fcs_sse_no_aesni.d -MT obj/ethernet_fcs_sse_no_aesni.o -o obj/ethernet_fcs_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM 
-DSAFE_LOOKUP no-aesni/ethernet_fcs_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/crc16_x25_sse_no_aesni.d -MT obj/crc16_x25_sse_no_aesni.o -o obj/crc16_x25_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc16_x25_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/aes_cbcs_1_9_enc_128_x4_no_aesni.d -MT obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o -o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes_cbcs_1_9_enc_128_x4_no_aesni.asm 00:01:39.400 nasm -MD obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.d -MT obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o -o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/aes128_cbcs_1_9_dec_by4_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_sse.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes128_cbcs_1_9_submit_sse.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_sse.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes128_cbcs_1_9_flush_sse.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/crc32_refl_by8_sse_no_aesni.d -MT obj/crc32_refl_by8_sse_no_aesni.o -o obj/crc32_refl_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_refl_by8_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/crc32_by8_sse_no_aesni.d -MT obj/crc32_by8_sse_no_aesni.o -o obj/crc32_by8_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_by8_sse_no_aesni.asm 00:01:39.400 nasm -MD obj/crc32_sctp_sse_no_aesni.d -MT obj/crc32_sctp_sse_no_aesni.o -o obj/crc32_sctp_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_sctp_sse_no_aesni.asm 00:01:39.400 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_flush_sse_no_aesni.o 00:01:39.400 ld -r -z ibt -z shstk -o obj/ethernet_fcs_sse_no_aesni.o.tmp obj/ethernet_fcs_sse_no_aesni.o 00:01:39.400 ld -r -z ibt -z shstk -o obj/crc16_x25_sse_no_aesni.o.tmp obj/crc16_x25_sse_no_aesni.o 00:01:39.400 nasm -MD obj/crc32_lte_sse_no_aesni.d -MT obj/crc32_lte_sse_no_aesni.o -o obj/crc32_lte_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_lte_sse_no_aesni.asm 00:01:39.400 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_submit_sse_no_aesni.o 
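# A rough sketch of the assemble/relink pattern repeated in the intel-ipsec-mb build
# output above, not the library's actual Makefile rule: each .asm source is assembled
# with nasm, then relinked with `ld -r -z ibt -z shstk` so the object carries IBT/SHSTK
# (CET) properties; the result is written to a .tmp file and moved into place.
mkdir -p obj
for src in x86_64/*.asm; do
    base=$(basename "$src" .asm)
    nasm -MD "obj/$base.d" -MT "obj/$base.o" -o "obj/$base.o" \
        -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ \
        -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP "$src"
    ld -r -z ibt -z shstk -o "obj/$base.o.tmp" "obj/$base.o"
    mv "obj/$base.o.tmp" "obj/$base.o"
done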
00:01:39.400 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes192_flush_sse_no_aesni.o 00:01:39.400 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o 00:01:39.400 mv obj/mb_mgr_aes_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_flush_sse_no_aesni.o 00:01:39.400 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_flush_sse_no_aesni.o 00:01:39.400 mv obj/ethernet_fcs_sse_no_aesni.o.tmp obj/ethernet_fcs_sse_no_aesni.o 00:01:39.400 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes192_submit_sse_no_aesni.o 00:01:39.400 mv obj/crc16_x25_sse_no_aesni.o.tmp obj/crc16_x25_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes256_submit_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_submit_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes192_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes192_flush_sse_no_aesni.o 00:01:39.401 nasm -MD obj/crc32_fp_sse_no_aesni.d -MT obj/crc32_fp_sse_no_aesni.o -o obj/crc32_fp_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_fp_sse_no_aesni.asm 00:01:39.401 mv obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes256_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_flush_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes192_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes192_submit_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes256_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes256_submit_sse_no_aesni.o 00:01:39.401 nasm -MD obj/crc32_iuup_sse_no_aesni.d -MT obj/crc32_iuup_sse_no_aesni.o -o obj/crc32_iuup_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_iuup_sse_no_aesni.asm 00:01:39.401 nasm -MD obj/crc32_wimax_sse_no_aesni.d -MT obj/crc32_wimax_sse_no_aesni.o -o obj/crc32_wimax_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/crc32_wimax_sse_no_aesni.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/crc32_sctp_sse_no_aesni.o.tmp obj/crc32_sctp_sse_no_aesni.o 00:01:39.401 nasm -MD obj/gcm128_sse_no_aesni.d -MT obj/gcm128_sse_no_aesni.o -o obj/gcm128_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm128_sse_no_aesni.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/crc32_fp_sse_no_aesni.o.tmp obj/crc32_fp_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/crc32_lte_sse_no_aesni.o.tmp obj/crc32_lte_sse_no_aesni.o 00:01:39.401 nasm -MD obj/gcm192_sse_no_aesni.d -MT obj/gcm192_sse_no_aesni.o -o obj/gcm192_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm192_sse_no_aesni.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/crc32_iuup_sse_no_aesni.o.tmp obj/crc32_iuup_sse_no_aesni.o 00:01:39.401 nasm -MD obj/gcm256_sse_no_aesni.d -MT obj/gcm256_sse_no_aesni.o -o obj/gcm256_sse_no_aesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/gcm256_sse_no_aesni.asm 00:01:39.401 mv obj/crc32_sctp_sse_no_aesni.o.tmp obj/crc32_sctp_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o 00:01:39.401 mv 
obj/crc32_fp_sse_no_aesni.o.tmp obj/crc32_fp_sse_no_aesni.o 00:01:39.401 mv obj/crc32_lte_sse_no_aesni.o.tmp obj/crc32_lte_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/crc32_wimax_sse_no_aesni.o.tmp obj/crc32_wimax_sse_no_aesni.o 00:01:39.401 mv obj/crc32_iuup_sse_no_aesni.o.tmp obj/crc32_iuup_sse_no_aesni.o 00:01:39.401 nasm -MD obj/aes128_cbc_dec_by4_sse.d -MT obj/aes128_cbc_dec_by4_sse.o -o obj/aes128_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_dec_by4_sse.asm 00:01:39.401 nasm -MD obj/aes128_cbc_dec_by8_sse.d -MT obj/aes128_cbc_dec_by8_sse.o -o obj/aes128_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_dec_by8_sse.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes_keyexp_128.o.tmp obj/aes_keyexp_128.o 00:01:39.401 mv obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o 00:01:39.401 mv obj/crc32_wimax_sse_no_aesni.o.tmp obj/crc32_wimax_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o 00:01:39.401 nasm -MD obj/aes192_cbc_dec_by4_sse.d -MT obj/aes192_cbc_dec_by4_sse.o -o obj/aes192_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cbc_dec_by4_sse.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes_cmac_subkey_gen.o.tmp obj/aes_cmac_subkey_gen.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by4_sse.o.tmp obj/aes128_cbc_dec_by4_sse.o 00:01:39.401 mv obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o 00:01:39.401 mv obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o 00:01:39.401 mv obj/aes_keyexp_128.o.tmp obj/aes_keyexp_128.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o 00:01:39.401 mv obj/aes128_cbc_dec_by4_sse.o.tmp obj/aes128_cbc_dec_by4_sse.o 00:01:39.401 nasm -MD obj/aes192_cbc_dec_by8_sse.d -MT obj/aes192_cbc_dec_by8_sse.o -o obj/aes192_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cbc_dec_by8_sse.asm 00:01:39.401 mv obj/aes_cmac_subkey_gen.o.tmp obj/aes_cmac_subkey_gen.o 00:01:39.401 mv obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o 00:01:39.401 mv obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o 
00:01:39.401 nasm -MD obj/aes256_cbc_dec_by4_sse.d -MT obj/aes256_cbc_dec_by4_sse.o -o obj/aes256_cbc_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_dec_by4_sse.asm 00:01:39.401 mv obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o 00:01:39.401 nasm -MD obj/aes256_cbc_dec_by8_sse.d -MT obj/aes256_cbc_dec_by8_sse.o -o obj/aes256_cbc_dec_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_dec_by8_sse.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes128_ecbenc_x3.o.tmp obj/aes128_ecbenc_x3.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by8_sse.o.tmp obj/aes128_cbc_dec_by8_sse.o 00:01:39.401 nasm -MD obj/aes_cbc_enc_128_x4.d -MT obj/aes_cbc_enc_128_x4.o -o obj/aes_cbc_enc_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_128_x4.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by4_sse.o.tmp obj/aes192_cbc_dec_by4_sse.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by4_sse.o.tmp obj/aes256_cbc_dec_by4_sse.o 00:01:39.401 mv obj/aes128_ecbenc_x3.o.tmp obj/aes128_ecbenc_x3.o 00:01:39.401 mv obj/aes128_cbc_dec_by8_sse.o.tmp obj/aes128_cbc_dec_by8_sse.o 00:01:39.401 nasm -MD obj/aes_cbc_enc_192_x4.d -MT obj/aes_cbc_enc_192_x4.o -o obj/aes_cbc_enc_192_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_192_x4.asm 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes_keyexp_256.o.tmp obj/aes_keyexp_256.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by8_sse.o.tmp obj/aes192_cbc_dec_by8_sse.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by8_sse.o.tmp obj/aes256_cbc_dec_by8_sse.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x4.o.tmp obj/aes_cbc_enc_128_x4.o 00:01:39.401 mv obj/aes192_cbc_dec_by4_sse.o.tmp obj/aes192_cbc_dec_by4_sse.o 00:01:39.401 mv obj/aes256_cbc_dec_by4_sse.o.tmp obj/aes256_cbc_dec_by4_sse.o 00:01:39.401 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x4.o.tmp obj/aes_cbc_enc_192_x4.o 00:01:39.401 mv obj/aes_keyexp_256.o.tmp obj/aes_keyexp_256.o 00:01:39.401 mv obj/aes192_cbc_dec_by8_sse.o.tmp obj/aes192_cbc_dec_by8_sse.o 00:01:39.401 mv obj/aes_cbc_enc_128_x4.o.tmp obj/aes_cbc_enc_128_x4.o 00:01:39.401 nasm -MD obj/aes_cbc_enc_256_x4.d -MT obj/aes_cbc_enc_256_x4.o -o obj/aes_cbc_enc_256_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_256_x4.asm 00:01:39.401 mv obj/aes256_cbc_dec_by8_sse.o.tmp obj/aes256_cbc_dec_by8_sse.o 00:01:39.401 nasm -MD obj/aes_cbc_enc_128_x8_sse.d -MT obj/aes_cbc_enc_128_x8_sse.o -o obj/aes_cbc_enc_128_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_128_x8_sse.asm 00:01:39.401 nasm -MD obj/aes_cbc_enc_192_x8_sse.d -MT obj/aes_cbc_enc_192_x8_sse.o -o obj/aes_cbc_enc_192_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_192_x8_sse.asm 00:01:39.401 nasm -MD obj/aes_cbc_enc_256_x8_sse.d -MT obj/aes_cbc_enc_256_x8_sse.o -o obj/aes_cbc_enc_256_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbc_enc_256_x8_sse.asm 00:01:39.401 mv obj/aes_cbc_enc_192_x4.o.tmp obj/aes_cbc_enc_192_x4.o 00:01:39.401 nasm -MD obj/pon_sse.d 
-MT obj/pon_sse.o -o obj/pon_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/pon_sse.asm 00:01:39.401 nasm -MD obj/aes128_cntr_by8_sse.d -MT obj/aes128_cntr_by8_sse.o -o obj/aes128_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cntr_by8_sse.asm 00:01:39.401 nasm -MD obj/aes192_cntr_by8_sse.d -MT obj/aes192_cntr_by8_sse.o -o obj/aes192_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes192_cntr_by8_sse.asm 00:01:39.401 nasm -MD obj/aes256_cntr_by8_sse.d -MT obj/aes256_cntr_by8_sse.o -o obj/aes256_cntr_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cntr_by8_sse.asm 00:01:39.401 nasm -MD obj/aes_ecb_by4_sse.d -MT obj/aes_ecb_by4_sse.o -o obj/aes_ecb_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_ecb_by4_sse.asm 00:01:39.401 nasm -MD obj/aes128_cntr_ccm_by8_sse.d -MT obj/aes128_cntr_ccm_by8_sse.o -o obj/aes128_cntr_ccm_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cntr_ccm_by8_sse.asm 00:01:39.401 nasm -MD obj/aes256_cntr_ccm_by8_sse.d -MT obj/aes256_cntr_ccm_by8_sse.o -o obj/aes256_cntr_ccm_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cntr_ccm_by8_sse.asm 00:01:39.401 nasm -MD obj/aes_cfb_sse.d -MT obj/aes_cfb_sse.o -o obj/aes_cfb_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cfb_sse.asm 00:01:39.401 nasm -MD obj/aes128_cbc_mac_x4.d -MT obj/aes128_cbc_mac_x4.o -o obj/aes128_cbc_mac_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_mac_x4.asm 00:01:39.401 nasm -MD obj/aes256_cbc_mac_x4.d -MT obj/aes256_cbc_mac_x4.o -o obj/aes256_cbc_mac_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_mac_x4.asm 00:01:39.401 nasm -MD obj/aes128_cbc_mac_x8_sse.d -MT obj/aes128_cbc_mac_x8_sse.o -o obj/aes128_cbc_mac_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbc_mac_x8_sse.asm 00:01:39.402 nasm -MD obj/aes256_cbc_mac_x8_sse.d -MT obj/aes256_cbc_mac_x8_sse.o -o obj/aes256_cbc_mac_x8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes256_cbc_mac_x8_sse.asm 00:01:39.402 nasm -MD obj/aes_xcbc_mac_128_x4.d -MT obj/aes_xcbc_mac_128_x4.o -o obj/aes_xcbc_mac_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_xcbc_mac_128_x4.asm 00:01:39.402 nasm -MD obj/md5_x4x2_sse.d -MT obj/md5_x4x2_sse.o -o obj/md5_x4x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/md5_x4x2_sse.asm 00:01:39.402 nasm -MD obj/sha1_mult_sse.d -MT obj/sha1_mult_sse.o -o obj/sha1_mult_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_mult_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_cfb_sse_no_aesni.o.tmp obj/aes_cfb_sse_no_aesni.o 00:01:39.402 nasm -MD obj/sha1_one_block_sse.d -MT obj/sha1_one_block_sse.o -o obj/sha1_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA 
-DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_one_block_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x4.o.tmp obj/aes_cbc_enc_256_x4.o 00:01:39.402 nasm -MD obj/sha224_one_block_sse.d -MT obj/sha224_one_block_sse.o -o obj/sha224_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha224_one_block_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x8_sse.o.tmp obj/aes_cbc_enc_256_x8_sse.o 00:01:39.402 mv obj/aes_cfb_sse_no_aesni.o.tmp obj/aes_cfb_sse_no_aesni.o 00:01:39.402 mv obj/aes_cbc_enc_256_x4.o.tmp obj/aes_cbc_enc_256_x4.o 00:01:39.402 nasm -MD obj/sha256_one_block_sse.d -MT obj/sha256_one_block_sse.o -o obj/sha256_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha256_one_block_sse.asm 00:01:39.402 nasm -MD obj/sha384_one_block_sse.d -MT obj/sha384_one_block_sse.o -o obj/sha384_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha384_one_block_sse.asm 00:01:39.402 mv obj/aes_cbc_enc_256_x8_sse.o.tmp obj/aes_cbc_enc_256_x8_sse.o 00:01:39.402 nasm -MD obj/sha512_one_block_sse.d -MT obj/sha512_one_block_sse.o -o obj/sha512_one_block_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha512_one_block_sse.asm 00:01:39.402 nasm -MD obj/sha512_x2_sse.d -MT obj/sha512_x2_sse.o -o obj/sha512_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha512_x2_sse.asm 00:01:39.402 nasm -MD obj/sha_256_mult_sse.d -MT obj/sha_256_mult_sse.o -o obj/sha_256_mult_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha_256_mult_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x8_sse.o.tmp obj/aes_cbc_enc_128_x8_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x8_sse.o.tmp obj/aes_cbc_enc_192_x8_sse.o 00:01:39.402 nasm -MD obj/sha1_ni_x2_sse.d -MT obj/sha1_ni_x2_sse.o -o obj/sha1_ni_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha1_ni_x2_sse.asm 00:01:39.402 nasm -MD obj/sha256_ni_x2_sse.d -MT obj/sha256_ni_x2_sse.o -o obj/sha256_ni_x2_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/sha256_ni_x2_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x8_sse.o.tmp obj/aes256_cbc_mac_x8_sse.o 00:01:39.402 mv obj/aes_cbc_enc_192_x8_sse.o.tmp obj/aes_cbc_enc_192_x8_sse.o 00:01:39.402 nasm -MD obj/zuc_sse.d -MT obj/zuc_sse.o -o obj/zuc_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/zuc_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o 00:01:39.402 mv obj/aes_cbc_enc_128_x8_sse.o.tmp obj/aes_cbc_enc_128_x8_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/sha256_one_block_sse.o.tmp obj/sha256_one_block_sse.o 00:01:39.402 mv obj/aes256_cbc_mac_x8_sse.o.tmp obj/aes256_cbc_mac_x8_sse.o 00:01:39.402 nasm -MD obj/zuc_sse_gfni.d -MT obj/zuc_sse_gfni.o -o obj/zuc_sse_gfni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/zuc_sse_gfni.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_cfb_sse.o.tmp obj/aes_cfb_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/sha384_one_block_sse.o.tmp 
obj/sha384_one_block_sse.o 00:01:39.402 mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o 00:01:39.402 mv obj/sha256_one_block_sse.o.tmp obj/sha256_one_block_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/sha512_one_block_sse.o.tmp obj/sha512_one_block_sse.o 00:01:39.402 nasm -MD obj/mb_mgr_aes_flush_sse.d -MT obj/mb_mgr_aes_flush_sse.o -o obj/mb_mgr_aes_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_flush_sse.asm 00:01:39.402 mv obj/aes_cfb_sse.o.tmp obj/aes_cfb_sse.o 00:01:39.402 mv obj/sha384_one_block_sse.o.tmp obj/sha384_one_block_sse.o 00:01:39.402 mv obj/sha512_one_block_sse.o.tmp obj/sha512_one_block_sse.o 00:01:39.402 nasm -MD obj/mb_mgr_aes_submit_sse.d -MT obj/mb_mgr_aes_submit_sse.o -o obj/mb_mgr_aes_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_submit_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x4.o.tmp obj/aes128_cbc_mac_x4.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x4.o.tmp obj/aes256_cbc_mac_x4.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x8_sse.o.tmp obj/aes128_cbc_mac_x8_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/sha224_one_block_sse.o.tmp obj/sha224_one_block_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x4.o.tmp obj/aes_xcbc_mac_128_x4.o 00:01:39.402 nasm -MD obj/mb_mgr_aes192_flush_sse.d -MT obj/mb_mgr_aes192_flush_sse.o -o obj/mb_mgr_aes192_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_flush_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/aes_ecb_by4_sse.o.tmp obj/aes_ecb_by4_sse.o 00:01:39.402 mv obj/aes128_cbc_mac_x4.o.tmp obj/aes128_cbc_mac_x4.o 00:01:39.402 mv obj/aes256_cbc_mac_x4.o.tmp obj/aes256_cbc_mac_x4.o 00:01:39.402 mv obj/aes128_cbc_mac_x8_sse.o.tmp obj/aes128_cbc_mac_x8_sse.o 00:01:39.402 ld -r -z ibt -z shstk -o obj/sha1_one_block_sse.o.tmp obj/sha1_one_block_sse.o 00:01:39.402 mv obj/sha224_one_block_sse.o.tmp obj/sha224_one_block_sse.o 00:01:39.402 mv obj/aes_xcbc_mac_128_x4.o.tmp obj/aes_xcbc_mac_128_x4.o 00:01:39.402 nasm -MD obj/mb_mgr_aes192_submit_sse.d -MT obj/mb_mgr_aes192_submit_sse.o -o obj/mb_mgr_aes192_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_submit_sse.asm 00:01:39.402 ld -r -z ibt -z shstk -o obj/sha1_ni_x2_sse.o.tmp obj/sha1_ni_x2_sse.o 00:01:39.402 mv obj/aes_ecb_by4_sse.o.tmp obj/aes_ecb_by4_sse.o 00:01:39.402 mv obj/sha1_one_block_sse.o.tmp obj/sha1_one_block_sse.o 00:01:39.402 nasm -MD obj/mb_mgr_aes256_flush_sse.d -MT obj/mb_mgr_aes256_flush_sse.o -o obj/mb_mgr_aes256_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_flush_sse.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes256_submit_sse.d -MT obj/mb_mgr_aes256_submit_sse.o -o obj/mb_mgr_aes256_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_submit_sse.asm 00:01:39.402 mv obj/sha1_ni_x2_sse.o.tmp obj/sha1_ni_x2_sse.o 00:01:39.402 nasm -MD obj/mb_mgr_aes_flush_sse_x8.d -MT obj/mb_mgr_aes_flush_sse_x8.o -o obj/mb_mgr_aes_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_flush_sse_x8.asm 00:01:39.402 nasm -MD 
obj/mb_mgr_aes_submit_sse_x8.d -MT obj/mb_mgr_aes_submit_sse_x8.o -o obj/mb_mgr_aes_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_submit_sse_x8.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes192_flush_sse_x8.d -MT obj/mb_mgr_aes192_flush_sse_x8.o -o obj/mb_mgr_aes192_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_flush_sse_x8.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes192_submit_sse_x8.d -MT obj/mb_mgr_aes192_submit_sse_x8.o -o obj/mb_mgr_aes192_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes192_submit_sse_x8.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes256_flush_sse_x8.d -MT obj/mb_mgr_aes256_flush_sse_x8.o -o obj/mb_mgr_aes256_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_flush_sse_x8.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes256_submit_sse_x8.d -MT obj/mb_mgr_aes256_submit_sse_x8.o -o obj/mb_mgr_aes256_submit_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_submit_sse_x8.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse.o -o obj/mb_mgr_aes_cmac_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_cmac_submit_flush_sse.asm 00:01:39.402 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_cmac_submit_flush_sse.asm 00:01:39.663 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_sse_x8.d -MT obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o -o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_cmac_submit_flush_sse_x8.asm 00:01:39.663 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.d -MT obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o -o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_cmac_submit_flush_sse_x8.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_sse.o.tmp obj/aes128_cntr_ccm_by8_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_ccm_auth_submit_flush_sse.asm 00:01:39.663 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse.o.tmp obj/mb_mgr_aes_submit_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_ccm_auth_submit_flush_sse.asm 
00:01:39.663 mv obj/aes128_cntr_ccm_by8_sse.o.tmp obj/aes128_cntr_ccm_by8_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.asm 00:01:39.663 nasm -MD obj/mb_mgr_aes_xcbc_flush_sse.d -MT obj/mb_mgr_aes_xcbc_flush_sse.o -o obj/mb_mgr_aes_xcbc_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_xcbc_flush_sse.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse.o.tmp obj/mb_mgr_aes_flush_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_aes_xcbc_submit_sse.d -MT obj/mb_mgr_aes_xcbc_submit_sse.o -o obj/mb_mgr_aes_xcbc_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_aes_xcbc_submit_sse.asm 00:01:39.663 nasm -MD obj/mb_mgr_hmac_md5_flush_sse.d -MT obj/mb_mgr_hmac_md5_flush_sse.o -o obj/mb_mgr_hmac_md5_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_md5_flush_sse.asm 00:01:39.663 nasm -MD obj/mb_mgr_hmac_md5_submit_sse.d -MT obj/mb_mgr_hmac_md5_submit_sse.o -o obj/mb_mgr_hmac_md5_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_md5_submit_sse.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_sse.o.tmp obj/aes256_cntr_ccm_by8_sse.o 00:01:39.663 mv obj/mb_mgr_aes_submit_sse.o.tmp obj/mb_mgr_aes_submit_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_hmac_flush_sse.d -MT obj/mb_mgr_hmac_flush_sse.o -o obj/mb_mgr_hmac_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_flush_sse.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/sha_256_mult_sse.o.tmp obj/sha_256_mult_sse.o 00:01:39.663 ld -r -z ibt -z shstk -o obj/sha256_ni_x2_sse.o.tmp obj/sha256_ni_x2_sse.o 00:01:39.663 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse.o.tmp obj/mb_mgr_aes192_flush_sse.o 00:01:39.663 mv obj/mb_mgr_aes_flush_sse.o.tmp obj/mb_mgr_aes_flush_sse.o 00:01:39.663 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse.o.tmp obj/mb_mgr_aes192_submit_sse.o 00:01:39.663 mv obj/aes256_cntr_ccm_by8_sse.o.tmp obj/aes256_cntr_ccm_by8_sse.o 00:01:39.663 mv obj/sha_256_mult_sse.o.tmp obj/sha_256_mult_sse.o 00:01:39.663 mv obj/sha256_ni_x2_sse.o.tmp obj/sha256_ni_x2_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_hmac_submit_sse.d -MT obj/mb_mgr_hmac_submit_sse.o -o obj/mb_mgr_hmac_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_submit_sse.asm 00:01:39.663 mv obj/mb_mgr_aes192_flush_sse.o.tmp obj/mb_mgr_aes192_flush_sse.o 00:01:39.663 mv obj/mb_mgr_aes192_submit_sse.o.tmp obj/mb_mgr_aes192_submit_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_hmac_sha_224_flush_sse.d -MT obj/mb_mgr_hmac_sha_224_flush_sse.o -o obj/mb_mgr_hmac_sha_224_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_flush_sse.asm 00:01:39.663 nasm -MD obj/mb_mgr_hmac_sha_224_submit_sse.d -MT obj/mb_mgr_hmac_sha_224_submit_sse.o -o obj/mb_mgr_hmac_sha_224_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_submit_sse.asm 
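The "ld -r -z ibt -z shstk" and "mv *.o.tmp" pairs interleaved with the nasm lines post-process each freshly assembled object: a relocatable re-link marks the object with the IBT/shadow-stack (CET) property notes, writing to a .tmp file that is renamed over the real object only afterwards, so a failed link never leaves a half-written .o for make to treat as up to date. Because the build drives many of these recipes in parallel, the steps for different objects appear interleaved in this log. A hedged sketch of that post-processing for one object (the helper name is hypothetical; the object name and flags are taken from the log):

# Hypothetical helper mirroring the ld/mv pairs in the log above.
post_link() {
    ld -r -z ibt -z shstk -o "$1.tmp" "$1" &&   # relocatable link; marks output with IBT/SHSTK (CET) properties
    mv "$1.tmp" "$1"                            # rename into place only if the link succeeded
}
post_link obj/aes_keyexp_128.o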
00:01:39.663 ld -r -z ibt -z shstk -o obj/sha1_mult_sse.o.tmp obj/sha1_mult_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_hmac_sha_256_flush_sse.d -MT obj/mb_mgr_hmac_sha_256_flush_sse.o -o obj/mb_mgr_hmac_sha_256_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_flush_sse.asm 00:01:39.663 nasm -MD obj/mb_mgr_hmac_sha_256_submit_sse.d -MT obj/mb_mgr_hmac_sha_256_submit_sse.o -o obj/mb_mgr_hmac_sha_256_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_submit_sse.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/crc32_by8_sse_no_aesni.o.tmp obj/crc32_by8_sse_no_aesni.o 00:01:39.663 ld -r -z ibt -z shstk -o obj/sha512_x2_sse.o.tmp obj/sha512_x2_sse.o 00:01:39.663 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_sse_x8.o.tmp obj/mb_mgr_aes192_submit_sse_x8.o 00:01:39.663 mv obj/sha1_mult_sse.o.tmp obj/sha1_mult_sse.o 00:01:39.663 nasm -MD obj/mb_mgr_hmac_sha_384_flush_sse.d -MT obj/mb_mgr_hmac_sha_384_flush_sse.o -o obj/mb_mgr_hmac_sha_384_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_384_flush_sse.asm 00:01:39.663 ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_sse.o.tmp obj/aes128_cntr_by8_sse.o 00:01:39.663 mv obj/crc32_by8_sse_no_aesni.o.tmp obj/crc32_by8_sse_no_aesni.o 00:01:39.663 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_sse_x8.o.tmp obj/mb_mgr_aes_submit_sse_x8.o 00:01:39.664 mv obj/sha512_x2_sse.o.tmp obj/sha512_x2_sse.o 00:01:39.664 mv obj/mb_mgr_aes192_submit_sse_x8.o.tmp obj/mb_mgr_aes192_submit_sse_x8.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_384_submit_sse.d -MT obj/mb_mgr_hmac_sha_384_submit_sse.o -o obj/mb_mgr_hmac_sha_384_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_384_submit_sse.asm 00:01:39.664 mv obj/aes128_cntr_by8_sse.o.tmp obj/aes128_cntr_by8_sse.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_512_flush_sse.d -MT obj/mb_mgr_hmac_sha_512_flush_sse.o -o obj/mb_mgr_hmac_sha_512_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_512_flush_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_sse_x8.o.tmp obj/mb_mgr_aes192_flush_sse_x8.o 00:01:39.664 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse.o.tmp obj/mb_mgr_aes256_submit_sse.o 00:01:39.664 mv obj/mb_mgr_aes_submit_sse_x8.o.tmp obj/mb_mgr_aes_submit_sse_x8.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_512_submit_sse.d -MT obj/mb_mgr_hmac_sha_512_submit_sse.o -o obj/mb_mgr_hmac_sha_512_submit_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_512_submit_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse.o.tmp obj/mb_mgr_aes256_flush_sse.o 00:01:39.664 mv obj/mb_mgr_aes192_flush_sse_x8.o.tmp obj/mb_mgr_aes192_flush_sse_x8.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_flush_ni_sse.d -MT obj/mb_mgr_hmac_flush_ni_sse.o -o obj/mb_mgr_hmac_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP 
sse/mb_mgr_hmac_flush_ni_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/crc32_refl_by8_sse_no_aesni.o.tmp obj/crc32_refl_by8_sse_no_aesni.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_sse_x8.o.tmp obj/mb_mgr_aes_flush_sse_x8.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_sse_x8.o.tmp obj/mb_mgr_aes256_submit_sse_x8.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_sse.o.tmp obj/aes192_cntr_by8_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_sse.o.tmp obj/mb_mgr_aes_xcbc_flush_sse.o 00:01:39.664 mv obj/mb_mgr_aes256_submit_sse.o.tmp obj/mb_mgr_aes256_submit_sse.o 00:01:39.664 mv obj/mb_mgr_aes256_flush_sse.o.tmp obj/mb_mgr_aes256_flush_sse.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_submit_ni_sse.d -MT obj/mb_mgr_hmac_submit_ni_sse.o -o obj/mb_mgr_hmac_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_submit_ni_sse.asm 00:01:39.664 mv obj/crc32_refl_by8_sse_no_aesni.o.tmp obj/crc32_refl_by8_sse_no_aesni.o 00:01:39.664 mv obj/mb_mgr_aes_flush_sse_x8.o.tmp obj/mb_mgr_aes_flush_sse_x8.o 00:01:39.664 mv obj/mb_mgr_aes256_submit_sse_x8.o.tmp obj/mb_mgr_aes256_submit_sse_x8.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/aes_keyexp_192.o.tmp obj/aes_keyexp_192.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_sse_x8.o.tmp obj/mb_mgr_aes256_flush_sse_x8.o 00:01:39.664 mv obj/aes192_cntr_by8_sse.o.tmp obj/aes192_cntr_by8_sse.o 00:01:39.664 mv obj/mb_mgr_aes_xcbc_flush_sse.o.tmp obj/mb_mgr_aes_xcbc_flush_sse.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_224_flush_ni_sse.d -MT obj/mb_mgr_hmac_sha_224_flush_ni_sse.o -o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_flush_ni_sse.asm 00:01:39.664 mv obj/aes_keyexp_192.o.tmp obj/aes_keyexp_192.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_224_submit_ni_sse.d -MT obj/mb_mgr_hmac_sha_224_submit_ni_sse.o -o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_224_submit_ni_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_sse.o.tmp obj/mb_mgr_hmac_flush_sse.o 00:01:39.664 mv obj/mb_mgr_aes256_flush_sse_x8.o.tmp obj/mb_mgr_aes256_flush_sse_x8.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_256_flush_ni_sse.d -MT obj/mb_mgr_hmac_sha_256_flush_ni_sse.o -o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_flush_ni_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_sse.o.tmp obj/mb_mgr_hmac_md5_submit_sse.o 00:01:39.664 nasm -MD obj/mb_mgr_hmac_sha_256_submit_ni_sse.d -MT obj/mb_mgr_hmac_sha_256_submit_ni_sse.o -o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_hmac_sha_256_submit_ni_sse.asm 00:01:39.664 mv obj/mb_mgr_hmac_flush_sse.o.tmp obj/mb_mgr_hmac_flush_sse.o 00:01:39.664 nasm -MD obj/mb_mgr_zuc_submit_flush_sse.d -MT obj/mb_mgr_zuc_submit_flush_sse.o -o obj/mb_mgr_zuc_submit_flush_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_zuc_submit_flush_sse.asm 00:01:39.664 mv obj/mb_mgr_hmac_md5_submit_sse.o.tmp obj/mb_mgr_hmac_md5_submit_sse.o 00:01:39.664 nasm -MD obj/mb_mgr_zuc_submit_flush_gfni_sse.d -MT 
obj/mb_mgr_zuc_submit_flush_gfni_sse.o -o obj/mb_mgr_zuc_submit_flush_gfni_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/mb_mgr_zuc_submit_flush_gfni_sse.asm 00:01:39.664 nasm -MD obj/ethernet_fcs_sse.d -MT obj/ethernet_fcs_sse.o -o obj/ethernet_fcs_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/ethernet_fcs_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_sse.o.tmp obj/mb_mgr_hmac_md5_flush_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_sse.o.tmp obj/mb_mgr_aes_xcbc_submit_sse.o 00:01:39.664 nasm -MD obj/crc16_x25_sse.d -MT obj/crc16_x25_sse.o -o obj/crc16_x25_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc16_x25_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_sse.o.tmp obj/aes256_cntr_by8_sse.o 00:01:39.664 nasm -MD obj/crc32_sctp_sse.d -MT obj/crc32_sctp_sse.o -o obj/crc32_sctp_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_sctp_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_md5_flush_sse.o.tmp obj/mb_mgr_hmac_md5_flush_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse.o 00:01:39.664 mv obj/mb_mgr_aes_xcbc_submit_sse.o.tmp obj/mb_mgr_aes_xcbc_submit_sse.o 00:01:39.664 mv obj/aes256_cntr_by8_sse.o.tmp obj/aes256_cntr_by8_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/ethernet_fcs_sse.o.tmp obj/ethernet_fcs_sse.o 00:01:39.664 mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o 00:01:39.664 mv obj/mb_mgr_hmac_sha_256_flush_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_sse.o 00:01:39.664 nasm -MD obj/aes_cbcs_1_9_enc_128_x4.d -MT obj/aes_cbcs_1_9_enc_128_x4.o -o obj/aes_cbcs_1_9_enc_128_x4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes_cbcs_1_9_enc_128_x4.asm 00:01:39.664 mv obj/mb_mgr_aes_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse.o 00:01:39.664 nasm -MD obj/aes128_cbcs_1_9_dec_by4_sse.d -MT obj/aes128_cbcs_1_9_dec_by4_sse.o -o obj/aes128_cbcs_1_9_dec_by4_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/aes128_cbcs_1_9_dec_by4_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o 00:01:39.664 mv obj/ethernet_fcs_sse.o.tmp obj/ethernet_fcs_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_sha_224_flush_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_sse.o 00:01:39.664 nasm -MD obj/crc32_refl_by8_sse.d -MT obj/crc32_refl_by8_sse.o -o obj/crc32_refl_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_refl_by8_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/crc16_x25_sse.o.tmp obj/crc16_x25_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_sse.o.tmp obj/mb_mgr_hmac_submit_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_ni_sse.o.tmp 
obj/mb_mgr_hmac_flush_ni_sse.o 00:01:39.664 mv obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o 00:01:39.664 nasm -MD obj/crc32_by8_sse.d -MT obj/crc32_by8_sse.o -o obj/crc32_by8_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_by8_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/crc32_sctp_sse.o.tmp obj/crc32_sctp_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_sse.o.tmp obj/mb_mgr_hmac_sha_512_flush_sse.o 00:01:39.664 mv obj/crc16_x25_sse.o.tmp obj/crc16_x25_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_submit_sse.o.tmp obj/mb_mgr_hmac_submit_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_flush_ni_sse.o.tmp obj/mb_mgr_hmac_flush_ni_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_sse.o.tmp obj/mb_mgr_hmac_sha_384_flush_sse.o 00:01:39.664 nasm -MD obj/crc32_lte_sse.d -MT obj/crc32_lte_sse.o -o obj/crc32_lte_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_lte_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_sse.o.tmp obj/mb_mgr_hmac_sha_384_submit_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by4_sse.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse.o 00:01:39.664 mv obj/crc32_sctp_sse.o.tmp obj/crc32_sctp_sse.o 00:01:39.664 nasm -MD obj/crc32_fp_sse.d -MT obj/crc32_fp_sse.o -o obj/crc32_fp_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_fp_sse.asm 00:01:39.664 mv obj/mb_mgr_hmac_sha_384_flush_sse.o.tmp obj/mb_mgr_hmac_sha_384_flush_sse.o 00:01:39.664 nasm -MD obj/crc32_iuup_sse.d -MT obj/crc32_iuup_sse.o -o obj/crc32_iuup_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_iuup_sse.asm 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_ni_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_sha_512_flush_sse.o.tmp obj/mb_mgr_hmac_sha_512_flush_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_sha_384_submit_sse.o.tmp obj/mb_mgr_hmac_sha_384_submit_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/pon_sse.o.tmp obj/pon_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_ni_sse.o 00:01:39.664 mv obj/aes128_cbcs_1_9_dec_by4_sse.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/crc32_lte_sse.o.tmp obj/crc32_lte_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/md5_x4x2_sse.o.tmp obj/md5_x4x2_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/crc32_refl_by8_sse.o.tmp obj/crc32_refl_by8_sse.o 00:01:39.664 mv obj/mb_mgr_aes256_cmac_submit_flush_sse.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse.o 00:01:39.664 mv obj/mb_mgr_hmac_sha_224_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_flush_ni_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/crc32_iuup_sse.o.tmp obj/crc32_iuup_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o 
obj/mb_mgr_hmac_sha_512_submit_sse.o.tmp obj/mb_mgr_hmac_sha_512_submit_sse.o 00:01:39.664 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_ni_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/crc32_by8_sse.o.tmp obj/crc32_by8_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/crc32_fp_sse.o.tmp obj/crc32_fp_sse.o 00:01:39.665 mv obj/pon_sse.o.tmp obj/pon_sse.o 00:01:39.665 mv obj/mb_mgr_hmac_sha_256_flush_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_flush_ni_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_ni_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x4.o.tmp obj/aes_cbcs_1_9_enc_128_x4.o 00:01:39.665 mv obj/crc32_lte_sse.o.tmp obj/crc32_lte_sse.o 00:01:39.665 mv obj/md5_x4x2_sse.o.tmp obj/md5_x4x2_sse.o 00:01:39.665 mv obj/crc32_refl_by8_sse.o.tmp obj/crc32_refl_by8_sse.o 00:01:39.665 mv obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o 00:01:39.665 mv obj/crc32_iuup_sse.o.tmp obj/crc32_iuup_sse.o 00:01:39.665 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o 00:01:39.665 mv obj/mb_mgr_hmac_sha_224_submit_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_sse.o 00:01:39.665 mv obj/mb_mgr_hmac_sha_512_submit_sse.o.tmp obj/mb_mgr_hmac_sha_512_submit_sse.o 00:01:39.665 mv obj/mb_mgr_hmac_sha_224_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_224_submit_ni_sse.o 00:01:39.665 mv obj/crc32_by8_sse.o.tmp obj/crc32_by8_sse.o 00:01:39.665 mv obj/crc32_fp_sse.o.tmp obj/crc32_fp_sse.o 00:01:39.665 mv obj/mb_mgr_hmac_sha_256_submit_ni_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_ni_sse.o 00:01:39.665 mv obj/aes_cbcs_1_9_enc_128_x4.o.tmp obj/aes_cbcs_1_9_enc_128_x4.o 00:01:39.665 nasm -MD obj/crc32_wimax_sse.d -MT obj/crc32_wimax_sse.o -o obj/crc32_wimax_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/crc32_wimax_sse.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_sse.o 00:01:39.665 nasm -MD obj/chacha20_sse.d -MT obj/chacha20_sse.o -o obj/chacha20_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/chacha20_sse.asm 00:01:39.665 nasm -MD obj/memcpy_sse.d -MT obj/memcpy_sse.o -o obj/memcpy_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/memcpy_sse.asm 00:01:39.665 nasm -MD obj/gcm128_sse.d -MT obj/gcm128_sse.o -o obj/gcm128_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm128_sse.asm 00:01:39.665 mv obj/mb_mgr_hmac_sha_256_submit_sse.o.tmp obj/mb_mgr_hmac_sha_256_submit_sse.o 00:01:39.665 nasm -MD obj/gcm192_sse.d -MT obj/gcm192_sse.o -o obj/gcm192_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm192_sse.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_ni_sse.o.tmp obj/mb_mgr_hmac_submit_ni_sse.o 00:01:39.665 nasm -MD obj/gcm256_sse.d -MT obj/gcm256_sse.o -o obj/gcm256_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/gcm256_sse.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/crc32_wimax_sse.o.tmp obj/crc32_wimax_sse.o 00:01:39.665 nasm -MD obj/aes_cbc_enc_128_x8.d -MT obj/aes_cbc_enc_128_x8.o -o obj/aes_cbc_enc_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA 
-DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_128_x8.asm 00:01:39.665 mv obj/mb_mgr_hmac_submit_ni_sse.o.tmp obj/mb_mgr_hmac_submit_ni_sse.o 00:01:39.665 mv obj/crc32_wimax_sse.o.tmp obj/crc32_wimax_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x4_no_aesni.o.tmp obj/aes_cbc_enc_128_x4_no_aesni.o 00:01:39.665 nasm -MD obj/aes_cbc_enc_192_x8.d -MT obj/aes_cbc_enc_192_x8.o -o obj/aes_cbc_enc_192_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_192_x8.asm 00:01:39.665 nasm -MD obj/aes_cbc_enc_256_x8.d -MT obj/aes_cbc_enc_256_x8.o -o obj/aes_cbc_enc_256_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbc_enc_256_x8.asm 00:01:39.665 nasm -MD obj/aes128_cbc_dec_by8_avx.d -MT obj/aes128_cbc_dec_by8_avx.o -o obj/aes128_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbc_dec_by8_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/memcpy_sse.o.tmp obj/memcpy_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o 00:01:39.665 mv obj/aes_cbc_enc_128_x4_no_aesni.o.tmp obj/aes_cbc_enc_128_x4_no_aesni.o 00:01:39.665 nasm -MD obj/aes192_cbc_dec_by8_avx.d -MT obj/aes192_cbc_dec_by8_avx.o -o obj/aes192_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes192_cbc_dec_by8_avx.asm 00:01:39.665 nasm -MD obj/aes256_cbc_dec_by8_avx.d -MT obj/aes256_cbc_dec_by8_avx.o -o obj/aes256_cbc_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cbc_dec_by8_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x4_no_aesni.o.tmp obj/aes128_cbc_mac_x4_no_aesni.o 00:01:39.665 mv obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o 00:01:39.665 nasm -MD obj/pon_avx.d -MT obj/pon_avx.o -o obj/pon_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/pon_avx.asm 00:01:39.665 mv obj/memcpy_sse.o.tmp obj/memcpy_sse.o 00:01:39.665 nasm -MD obj/aes128_cntr_by8_avx.d -MT obj/aes128_cntr_by8_avx.o -o obj/aes128_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cntr_by8_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o 00:01:39.665 nasm -MD obj/aes192_cntr_by8_avx.d -MT obj/aes192_cntr_by8_avx.o -o obj/aes192_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes192_cntr_by8_avx.asm 00:01:39.665 nasm -MD obj/aes256_cntr_by8_avx.d -MT obj/aes256_cntr_by8_avx.o -o obj/aes256_cntr_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cntr_by8_avx.asm 00:01:39.665 mv obj/aes128_cbc_mac_x4_no_aesni.o.tmp obj/aes128_cbc_mac_x4_no_aesni.o 00:01:39.665 nasm -MD obj/aes128_cntr_ccm_by8_avx.d -MT obj/aes128_cntr_ccm_by8_avx.o -o obj/aes128_cntr_ccm_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cntr_ccm_by8_avx.asm 00:01:39.665 nasm -MD obj/aes256_cntr_ccm_by8_avx.d -MT obj/aes256_cntr_ccm_by8_avx.o -o obj/aes256_cntr_ccm_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX 
-D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cntr_ccm_by8_avx.asm 00:01:39.665 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o 00:01:39.665 nasm -MD obj/aes_ecb_by4_avx.d -MT obj/aes_ecb_by4_avx.o -o obj/aes_ecb_by4_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_ecb_by4_avx.asm 00:01:39.665 nasm -MD obj/aes_cfb_avx.d -MT obj/aes_cfb_avx.o -o obj/aes_cfb_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cfb_avx.asm 00:01:39.665 nasm -MD obj/aes128_cbc_mac_x8.d -MT obj/aes128_cbc_mac_x8.o -o obj/aes128_cbc_mac_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbc_mac_x8.asm 00:01:39.665 nasm -MD obj/aes256_cbc_mac_x8.d -MT obj/aes256_cbc_mac_x8.o -o obj/aes256_cbc_mac_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes256_cbc_mac_x8.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x4_no_aesni.o.tmp obj/aes_xcbc_mac_128_x4_no_aesni.o 00:01:39.665 nasm -MD obj/aes_xcbc_mac_128_x8.d -MT obj/aes_xcbc_mac_128_x8.o -o obj/aes_xcbc_mac_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_xcbc_mac_128_x8.asm 00:01:39.665 nasm -MD obj/md5_x4x2_avx.d -MT obj/md5_x4x2_avx.o -o obj/md5_x4x2_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/md5_x4x2_avx.asm 00:01:39.665 nasm -MD obj/sha1_mult_avx.d -MT obj/sha1_mult_avx.o -o obj/sha1_mult_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha1_mult_avx.asm 00:01:39.665 mv obj/aes_xcbc_mac_128_x4_no_aesni.o.tmp obj/aes_xcbc_mac_128_x4_no_aesni.o 00:01:39.665 nasm -MD obj/sha1_one_block_avx.d -MT obj/sha1_one_block_avx.o -o obj/sha1_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha1_one_block_avx.asm 00:01:39.665 nasm -MD obj/sha224_one_block_avx.d -MT obj/sha224_one_block_avx.o -o obj/sha224_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha224_one_block_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x8.o.tmp obj/aes_cbc_enc_256_x8.o 00:01:39.665 nasm -MD obj/sha256_one_block_avx.d -MT obj/sha256_one_block_avx.o -o obj/sha256_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha256_one_block_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_gfni_sse.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_sse.o.tmp obj/mb_mgr_zuc_submit_flush_sse.o 00:01:39.665 nasm -MD obj/sha_256_mult_avx.d -MT obj/sha_256_mult_avx.o -o obj/sha_256_mult_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha_256_mult_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by8_avx.o.tmp obj/aes192_cbc_dec_by8_avx.o 00:01:39.665 mv obj/aes_cbc_enc_256_x8.o.tmp obj/aes_cbc_enc_256_x8.o 00:01:39.665 nasm -MD obj/sha384_one_block_avx.d -MT obj/sha384_one_block_avx.o -o obj/sha384_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha384_one_block_avx.asm 00:01:39.665 mv 
obj/mb_mgr_zuc_submit_flush_sse.o.tmp obj/mb_mgr_zuc_submit_flush_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_128_x8.o.tmp obj/aes_cbc_enc_128_x8.o 00:01:39.665 nasm -MD obj/sha512_one_block_avx.d -MT obj/sha512_one_block_avx.o -o obj/sha512_one_block_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha512_one_block_avx.asm 00:01:39.665 nasm -MD obj/sha512_x2_avx.d -MT obj/sha512_x2_avx.o -o obj/sha512_x2_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/sha512_x2_avx.asm 00:01:39.665 mv obj/mb_mgr_zuc_submit_flush_gfni_sse.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_sse.o 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x8.o.tmp obj/aes_cbc_enc_192_x8.o 00:01:39.665 mv obj/aes_cbc_enc_128_x8.o.tmp obj/aes_cbc_enc_128_x8.o 00:01:39.665 mv obj/aes192_cbc_dec_by8_avx.o.tmp obj/aes192_cbc_dec_by8_avx.o 00:01:39.665 nasm -MD obj/zuc_avx.d -MT obj/zuc_avx.o -o obj/zuc_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/zuc_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by8_avx.o.tmp obj/aes128_cbc_dec_by8_avx.o 00:01:39.665 nasm -MD obj/mb_mgr_aes_flush_avx.d -MT obj/mb_mgr_aes_flush_avx.o -o obj/mb_mgr_aes_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_flush_avx.asm 00:01:39.665 mv obj/aes_cbc_enc_192_x8.o.tmp obj/aes_cbc_enc_192_x8.o 00:01:39.665 nasm -MD obj/mb_mgr_aes_submit_avx.d -MT obj/mb_mgr_aes_submit_avx.o -o obj/mb_mgr_aes_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_submit_avx.asm 00:01:39.665 mv obj/aes128_cbc_dec_by8_avx.o.tmp obj/aes128_cbc_dec_by8_avx.o 00:01:39.665 nasm -MD obj/mb_mgr_aes192_flush_avx.d -MT obj/mb_mgr_aes192_flush_avx.o -o obj/mb_mgr_aes192_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes192_flush_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/sha256_one_block_avx.o.tmp obj/sha256_one_block_avx.o 00:01:39.665 nasm -MD obj/mb_mgr_aes192_submit_avx.d -MT obj/mb_mgr_aes192_submit_avx.o -o obj/mb_mgr_aes192_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes192_submit_avx.asm 00:01:39.665 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by8_avx.o.tmp obj/aes256_cbc_dec_by8_avx.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/sha1_one_block_avx.o.tmp obj/sha1_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes256_flush_avx.d -MT obj/mb_mgr_aes256_flush_avx.o -o obj/mb_mgr_aes256_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_flush_avx.asm 00:01:39.666 mv obj/sha256_one_block_avx.o.tmp obj/sha256_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes256_submit_avx.d -MT obj/mb_mgr_aes256_submit_avx.o -o obj/mb_mgr_aes256_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_submit_avx.asm 00:01:39.666 mv obj/aes256_cbc_dec_by8_avx.o.tmp obj/aes256_cbc_dec_by8_avx.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes_cfb_avx.o.tmp obj/aes_cfb_avx.o 00:01:39.666 mv obj/sha1_one_block_avx.o.tmp obj/sha1_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_avx.d -MT obj/mb_mgr_aes_cmac_submit_flush_avx.o -o 
obj/mb_mgr_aes_cmac_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_cmac_submit_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o.tmp obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_256_x4_no_aesni.o.tmp obj/aes_cbc_enc_256_x4_no_aesni.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/sha512_one_block_avx.o.tmp obj/sha512_one_block_avx.o 00:01:39.666 mv obj/aes_cfb_avx.o.tmp obj/aes_cfb_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_avx.d -MT obj/mb_mgr_aes256_cmac_submit_flush_avx.o -o obj/mb_mgr_aes256_cmac_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_cmac_submit_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/sha224_one_block_avx.o.tmp obj/sha224_one_block_avx.o 00:01:39.666 mv obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o.tmp obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o 00:01:39.666 mv obj/aes_cbc_enc_256_x4_no_aesni.o.tmp obj/aes_cbc_enc_256_x4_no_aesni.o 00:01:39.666 mv obj/sha512_one_block_avx.o.tmp obj/sha512_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_avx.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_ccm_auth_submit_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes_ecb_by4_avx.o.tmp obj/aes_ecb_by4_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes256_ccm_auth_submit_flush_avx.asm 00:01:39.666 nasm -MD obj/mb_mgr_aes_xcbc_flush_avx.d -MT obj/mb_mgr_aes_xcbc_flush_avx.o -o obj/mb_mgr_aes_xcbc_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_xcbc_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes128_cbc_mac_x8.o.tmp obj/aes128_cbc_mac_x8.o 00:01:39.666 mv obj/aes_ecb_by4_avx.o.tmp obj/aes_ecb_by4_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes_xcbc_submit_avx.d -MT obj/mb_mgr_aes_xcbc_submit_avx.o -o obj/mb_mgr_aes_xcbc_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes_xcbc_submit_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x8.o.tmp obj/aes256_cbc_mac_x8.o 00:01:39.666 mv obj/sha224_one_block_avx.o.tmp obj/sha224_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_md5_flush_avx.d -MT obj/mb_mgr_hmac_md5_flush_avx.o -o obj/mb_mgr_hmac_md5_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_md5_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes_xcbc_mac_128_x8.o.tmp obj/aes_xcbc_mac_128_x8.o 00:01:39.666 mv obj/aes128_cbc_mac_x8.o.tmp obj/aes128_cbc_mac_x8.o 00:01:39.666 mv obj/aes256_cbc_mac_x8.o.tmp obj/aes256_cbc_mac_x8.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_md5_submit_avx.d -MT obj/mb_mgr_hmac_md5_submit_avx.o -o obj/mb_mgr_hmac_md5_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_md5_submit_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_192_x4_no_aesni.o.tmp 
obj/aes_cbc_enc_192_x4_no_aesni.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/sha384_one_block_avx.o.tmp obj/sha384_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_flush_avx.d -MT obj/mb_mgr_hmac_flush_avx.o -o obj/mb_mgr_hmac_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_flush_avx.asm 00:01:39.666 mv obj/aes_xcbc_mac_128_x8.o.tmp obj/aes_xcbc_mac_128_x8.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_submit_avx.d -MT obj/mb_mgr_hmac_submit_avx.o -o obj/mb_mgr_hmac_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_submit_avx.asm 00:01:39.666 mv obj/aes_cbc_enc_192_x4_no_aesni.o.tmp obj/aes_cbc_enc_192_x4_no_aesni.o 00:01:39.666 mv obj/sha384_one_block_avx.o.tmp obj/sha384_one_block_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx.d -MT obj/mb_mgr_hmac_sha_224_flush_avx.o -o obj/mb_mgr_hmac_sha_224_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_224_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_avx.o.tmp obj/mb_mgr_aes_flush_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx.d -MT obj/mb_mgr_hmac_sha_224_submit_avx.o -o obj/mb_mgr_hmac_sha_224_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_224_submit_avx.asm 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx.d -MT obj/mb_mgr_hmac_sha_256_flush_avx.o -o obj/mb_mgr_hmac_sha_256_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_256_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_avx.o.tmp obj/mb_mgr_aes_submit_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx.d -MT obj/mb_mgr_hmac_sha_256_submit_avx.o -o obj/mb_mgr_hmac_sha_256_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_256_submit_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_avx.o.tmp obj/aes256_cntr_ccm_by8_avx.o 00:01:39.666 mv obj/mb_mgr_aes_flush_avx.o.tmp obj/mb_mgr_aes_flush_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx.d -MT obj/mb_mgr_hmac_sha_384_flush_avx.o -o obj/mb_mgr_hmac_sha_384_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_384_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_avx.o.tmp obj/mb_mgr_aes192_submit_avx.o 00:01:39.666 mv obj/mb_mgr_aes_submit_avx.o.tmp obj/mb_mgr_aes_submit_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx.d -MT obj/mb_mgr_hmac_sha_384_submit_avx.o -o obj/mb_mgr_hmac_sha_384_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_384_submit_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_avx.o.tmp obj/mb_mgr_aes256_flush_avx.o 00:01:39.666 mv obj/aes256_cntr_ccm_by8_avx.o.tmp obj/aes256_cntr_ccm_by8_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx.d -MT obj/mb_mgr_hmac_sha_512_flush_avx.o -o obj/mb_mgr_hmac_sha_512_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_512_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_avx.o.tmp 
obj/mb_mgr_aes256_submit_avx.o 00:01:39.666 mv obj/mb_mgr_aes192_submit_avx.o.tmp obj/mb_mgr_aes192_submit_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx.d -MT obj/mb_mgr_hmac_sha_512_submit_avx.o -o obj/mb_mgr_hmac_sha_512_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_hmac_sha_512_submit_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_avx.o.tmp obj/mb_mgr_aes192_flush_avx.o 00:01:39.666 mv obj/mb_mgr_aes256_flush_avx.o.tmp obj/mb_mgr_aes256_flush_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_zuc_submit_flush_avx.d -MT obj/mb_mgr_zuc_submit_flush_avx.o -o obj/mb_mgr_zuc_submit_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_zuc_submit_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_avx.o.tmp obj/aes128_cntr_ccm_by8_avx.o 00:01:39.666 mv obj/mb_mgr_aes256_submit_avx.o.tmp obj/mb_mgr_aes256_submit_avx.o 00:01:39.666 nasm -MD obj/ethernet_fcs_avx.d -MT obj/ethernet_fcs_avx.o -o obj/ethernet_fcs_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/ethernet_fcs_avx.asm 00:01:39.666 mv obj/mb_mgr_aes192_flush_avx.o.tmp obj/mb_mgr_aes192_flush_avx.o 00:01:39.666 nasm -MD obj/crc16_x25_avx.d -MT obj/crc16_x25_avx.o -o obj/crc16_x25_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc16_x25_avx.asm 00:01:39.666 nasm -MD obj/aes_cbcs_1_9_enc_128_x8.d -MT obj/aes_cbcs_1_9_enc_128_x8.o -o obj/aes_cbcs_1_9_enc_128_x8.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes_cbcs_1_9_enc_128_x8.asm 00:01:39.666 nasm -MD obj/aes128_cbcs_1_9_dec_by8_avx.d -MT obj/aes128_cbcs_1_9_dec_by8_avx.o -o obj/aes128_cbcs_1_9_dec_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/aes128_cbcs_1_9_dec_by8_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_flush_avx.o.tmp obj/mb_mgr_aes_xcbc_flush_avx.o 00:01:39.666 mv obj/aes128_cntr_ccm_by8_avx.o.tmp obj/aes128_cntr_ccm_by8_avx.o 00:01:39.666 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_submit_avx.d -MT obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o -o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes128_cbcs_1_9_submit_avx.asm 00:01:39.666 nasm -MD obj/mb_mgr_aes128_cbcs_1_9_flush_avx.d -MT obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o -o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/mb_mgr_aes128_cbcs_1_9_flush_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/ethernet_fcs_avx.o.tmp obj/ethernet_fcs_avx.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/crc16_x25_avx.o.tmp obj/crc16_x25_avx.o 00:01:39.666 nasm -MD obj/crc32_refl_by8_avx.d -MT obj/crc32_refl_by8_avx.o -o obj/crc32_refl_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_refl_by8_avx.asm 00:01:39.666 mv obj/mb_mgr_aes_xcbc_flush_avx.o.tmp obj/mb_mgr_aes_xcbc_flush_avx.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_avx.o.tmp obj/mb_mgr_hmac_md5_flush_avx.o 00:01:39.666 mv obj/ethernet_fcs_avx.o.tmp obj/ethernet_fcs_avx.o 00:01:39.666 mv obj/crc16_x25_avx.o.tmp obj/crc16_x25_avx.o 00:01:39.666 nasm -MD obj/crc32_by8_avx.d -MT obj/crc32_by8_avx.o -o 
obj/crc32_by8_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_by8_avx.asm 00:01:39.666 ld -r -z ibt -z shstk -o obj/sha_256_mult_avx.o.tmp obj/sha_256_mult_avx.o 00:01:39.666 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_avx.o.tmp obj/mb_mgr_aes_xcbc_submit_avx.o 00:01:39.666 mv obj/mb_mgr_hmac_md5_flush_avx.o.tmp obj/mb_mgr_hmac_md5_flush_avx.o 00:01:39.666 nasm -MD obj/crc32_sctp_avx.d -MT obj/crc32_sctp_avx.o -o obj/crc32_sctp_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_sctp_avx.asm 00:01:39.666 nasm -MD obj/crc32_lte_avx.d -MT obj/crc32_lte_avx.o -o obj/crc32_lte_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_lte_avx.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/sha1_mult_avx.o.tmp obj/sha1_mult_avx.o 00:01:39.667 mv obj/sha_256_mult_avx.o.tmp obj/sha_256_mult_avx.o 00:01:39.667 mv obj/mb_mgr_aes_xcbc_submit_avx.o.tmp obj/mb_mgr_aes_xcbc_submit_avx.o 00:01:39.667 nasm -MD obj/crc32_fp_avx.d -MT obj/crc32_fp_avx.o -o obj/crc32_fp_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_fp_avx.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_avx.o.tmp obj/mb_mgr_hmac_md5_submit_avx.o 00:01:39.667 mv obj/sha1_mult_avx.o.tmp obj/sha1_mult_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx.o.tmp obj/mb_mgr_hmac_flush_avx.o 00:01:39.667 nasm -MD obj/crc32_iuup_avx.d -MT obj/crc32_iuup_avx.o -o obj/crc32_iuup_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_iuup_avx.asm 00:01:39.667 nasm -MD obj/crc32_wimax_avx.d -MT obj/crc32_wimax_avx.o -o obj/crc32_wimax_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/crc32_wimax_avx.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_lte_avx.o.tmp obj/crc32_lte_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_md5_submit_avx.o.tmp obj/mb_mgr_hmac_md5_submit_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_224_flush_avx.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_flush_avx.o.tmp obj/mb_mgr_hmac_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_fp_avx.o.tmp obj/crc32_fp_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx.o 00:01:39.667 mv obj/crc32_lte_avx.o.tmp obj/crc32_lte_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_by8_avx.o.tmp obj/crc32_by8_avx.o 00:01:39.667 nasm -MD obj/chacha20_avx.d -MT obj/chacha20_avx.o -o obj/chacha20_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/chacha20_avx.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes_cmac_submit_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx.o.tmp obj/mb_mgr_hmac_submit_avx.o 00:01:39.667 ld 
-r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by8_avx.o.tmp obj/aes128_cbcs_1_9_dec_by8_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_sctp_avx.o.tmp obj/crc32_sctp_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_iuup_avx.o.tmp obj/crc32_iuup_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_wimax_avx.o.tmp obj/crc32_wimax_avx.o 00:01:39.667 mv obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/aes_cbcs_1_9_enc_128_x8.o.tmp obj/aes_cbcs_1_9_enc_128_x8.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/crc32_refl_by8_avx.o.tmp obj/crc32_refl_by8_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx.o 00:01:39.667 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_512_flush_avx.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx.o 00:01:39.667 mv obj/crc32_fp_avx.o.tmp obj/crc32_fp_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_256_submit_avx.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx.o 00:01:39.667 mv obj/crc32_by8_avx.o.tmp obj/crc32_by8_avx.o 00:01:39.667 mv obj/mb_mgr_aes_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes_cmac_submit_flush_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_submit_avx.o.tmp obj/mb_mgr_hmac_submit_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_avx.o.tmp obj/aes192_cntr_by8_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_224_submit_avx.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx.o 00:01:39.667 mv obj/aes128_cbcs_1_9_dec_by8_avx.o.tmp obj/aes128_cbcs_1_9_dec_by8_avx.o 00:01:39.667 mv obj/crc32_sctp_avx.o.tmp obj/crc32_sctp_avx.o 00:01:39.667 mv obj/crc32_iuup_avx.o.tmp obj/crc32_iuup_avx.o 00:01:39.667 mv obj/crc32_wimax_avx.o.tmp obj/crc32_wimax_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_256_flush_avx.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx.o 00:01:39.667 mv obj/mb_mgr_aes256_cmac_submit_flush_avx.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_avx.o 00:01:39.667 mv obj/aes_cbcs_1_9_enc_128_x8.o.tmp obj/aes_cbcs_1_9_enc_128_x8.o 00:01:39.667 mv obj/crc32_refl_by8_avx.o.tmp obj/crc32_refl_by8_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_avx.o.tmp obj/aes128_cntr_by8_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_384_flush_avx.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx.o 00:01:39.667 mv obj/aes192_cntr_by8_avx.o.tmp obj/aes192_cntr_by8_avx.o 00:01:39.667 nasm -MD obj/memcpy_avx.d -MT obj/memcpy_avx.o -o obj/memcpy_avx.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/memcpy_avx.asm 00:01:39.667 nasm -MD obj/gcm128_avx_gen2.d -MT obj/gcm128_avx_gen2.o -o obj/gcm128_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm128_avx_gen2.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx.o 00:01:39.667 mv obj/aes128_cntr_by8_avx.o.tmp obj/aes128_cntr_by8_avx.o 00:01:39.667 mv obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o.tmp 
obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o 00:01:39.667 nasm -MD obj/gcm192_avx_gen2.d -MT obj/gcm192_avx_gen2.o -o obj/gcm192_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm192_avx_gen2.asm 00:01:39.667 nasm -MD obj/gcm256_avx_gen2.d -MT obj/gcm256_avx_gen2.o -o obj/gcm256_avx_gen2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/gcm256_avx_gen2.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o 00:01:39.667 ld -r -z ibt -z shstk -o obj/aes256_cbc_mac_x4_no_aesni.o.tmp obj/aes256_cbc_mac_x4_no_aesni.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_512_submit_avx.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx.o 00:01:39.667 nasm -MD obj/md5_x8x2_avx2.d -MT obj/md5_x8x2_avx2.o -o obj/md5_x8x2_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/md5_x8x2_avx2.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/memcpy_avx.o.tmp obj/memcpy_avx.o 00:01:39.667 mv obj/mb_mgr_hmac_sha_384_submit_avx.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx.o 00:01:39.667 mv obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o.tmp obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o 00:01:39.667 nasm -MD obj/sha1_x8_avx2.d -MT obj/sha1_x8_avx2.o -o obj/sha1_x8_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha1_x8_avx2.asm 00:01:39.667 mv obj/aes256_cbc_mac_x4_no_aesni.o.tmp obj/aes256_cbc_mac_x4_no_aesni.o 00:01:39.667 nasm -MD obj/sha256_oct_avx2.d -MT obj/sha256_oct_avx2.o -o obj/sha256_oct_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha256_oct_avx2.asm 00:01:39.667 mv obj/memcpy_avx.o.tmp obj/memcpy_avx.o 00:01:39.667 nasm -MD obj/sha512_x4_avx2.d -MT obj/sha512_x4_avx2.o -o obj/sha512_x4_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/sha512_x4_avx2.asm 00:01:39.667 nasm -MD obj/zuc_avx2.d -MT obj/zuc_avx2.o -o obj/zuc_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/zuc_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_md5_flush_avx2.d -MT obj/mb_mgr_hmac_md5_flush_avx2.o -o obj/mb_mgr_hmac_md5_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_md5_flush_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_md5_submit_avx2.d -MT obj/mb_mgr_hmac_md5_submit_avx2.o -o obj/mb_mgr_hmac_md5_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_md5_submit_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_flush_avx2.d -MT obj/mb_mgr_hmac_flush_avx2.o -o obj/mb_mgr_hmac_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_flush_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_submit_avx2.d -MT obj/mb_mgr_hmac_submit_avx2.o -o obj/mb_mgr_hmac_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_submit_avx2.asm 00:01:39.667 ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_avx.o.tmp obj/aes256_cntr_by8_avx.o 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx2.d -MT obj/mb_mgr_hmac_sha_224_flush_avx2.o -o 
obj/mb_mgr_hmac_sha_224_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_224_flush_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx2.d -MT obj/mb_mgr_hmac_sha_224_submit_avx2.o -o obj/mb_mgr_hmac_sha_224_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_224_submit_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx2.d -MT obj/mb_mgr_hmac_sha_256_flush_avx2.o -o obj/mb_mgr_hmac_sha_256_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_256_flush_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx2.d -MT obj/mb_mgr_hmac_sha_256_submit_avx2.o -o obj/mb_mgr_hmac_sha_256_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_256_submit_avx2.asm 00:01:39.667 mv obj/aes256_cntr_by8_avx.o.tmp obj/aes256_cntr_by8_avx.o 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx2.d -MT obj/mb_mgr_hmac_sha_384_flush_avx2.o -o obj/mb_mgr_hmac_sha_384_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_384_flush_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx2.d -MT obj/mb_mgr_hmac_sha_384_submit_avx2.o -o obj/mb_mgr_hmac_sha_384_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_384_submit_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx2.d -MT obj/mb_mgr_hmac_sha_512_flush_avx2.o -o obj/mb_mgr_hmac_sha_512_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_512_flush_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_hmac_sha_512_submit_avx2.d -MT obj/mb_mgr_hmac_sha_512_submit_avx2.o -o obj/mb_mgr_hmac_sha_512_submit_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_hmac_sha_512_submit_avx2.asm 00:01:39.667 nasm -MD obj/mb_mgr_zuc_submit_flush_avx2.d -MT obj/mb_mgr_zuc_submit_flush_avx2.o -o obj/mb_mgr_zuc_submit_flush_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/mb_mgr_zuc_submit_flush_avx2.asm 00:01:39.668 nasm -MD obj/chacha20_avx2.d -MT obj/chacha20_avx2.o -o obj/chacha20_avx2.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/chacha20_avx2.asm 00:01:39.668 nasm -MD obj/gcm128_avx_gen4.d -MT obj/gcm128_avx_gen4.o -o obj/gcm128_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm128_avx_gen4.asm 00:01:39.668 nasm -MD obj/gcm192_avx_gen4.d -MT obj/gcm192_avx_gen4.o -o obj/gcm192_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm192_avx_gen4.asm 00:01:39.668 nasm -MD obj/gcm256_avx_gen4.d -MT obj/gcm256_avx_gen4.o -o obj/gcm256_avx_gen4.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx2/gcm256_avx_gen4.asm 00:01:39.668 nasm -MD obj/sha1_x16_avx512.d -MT obj/sha1_x16_avx512.o -o obj/sha1_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha1_x16_avx512.asm 00:01:39.668 ld -r -z ibt -z shstk -o 
obj/md5_x4x2_avx.o.tmp obj/md5_x4x2_avx.o 00:01:39.668 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx.o.tmp obj/mb_mgr_zuc_submit_flush_avx.o 00:01:39.668 nasm -MD obj/sha256_x16_avx512.d -MT obj/sha256_x16_avx512.o -o obj/sha256_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha256_x16_avx512.asm 00:01:39.668 nasm -MD obj/sha512_x8_avx512.d -MT obj/sha512_x8_avx512.o -o obj/sha512_x8_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/sha512_x8_avx512.asm 00:01:39.668 nasm -MD obj/des_x16_avx512.d -MT obj/des_x16_avx512.o -o obj/des_x16_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/des_x16_avx512.asm 00:01:39.668 mv obj/md5_x4x2_avx.o.tmp obj/md5_x4x2_avx.o 00:01:39.668 mv obj/mb_mgr_zuc_submit_flush_avx.o.tmp obj/mb_mgr_zuc_submit_flush_avx.o 00:01:39.668 nasm -MD obj/cntr_vaes_avx512.d -MT obj/cntr_vaes_avx512.o -o obj/cntr_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/cntr_vaes_avx512.asm 00:01:39.668 nasm -MD obj/cntr_ccm_vaes_avx512.d -MT obj/cntr_ccm_vaes_avx512.o -o obj/cntr_ccm_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/cntr_ccm_vaes_avx512.asm 00:01:39.668 nasm -MD obj/aes_cbc_dec_vaes_avx512.d -MT obj/aes_cbc_dec_vaes_avx512.o -o obj/aes_cbc_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbc_dec_vaes_avx512.asm 00:01:39.668 nasm -MD obj/aes_cbc_enc_vaes_avx512.d -MT obj/aes_cbc_enc_vaes_avx512.o -o obj/aes_cbc_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbc_enc_vaes_avx512.asm 00:01:39.668 ld -r -z ibt -z shstk -o obj/sha512_x2_avx.o.tmp obj/sha512_x2_avx.o 00:01:39.668 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx2.o 00:01:39.668 nasm -MD obj/aes_cbcs_enc_vaes_avx512.d -MT obj/aes_cbcs_enc_vaes_avx512.o -o obj/aes_cbcs_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbcs_enc_vaes_avx512.asm 00:01:39.668 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_submit_avx2.o.tmp obj/mb_mgr_hmac_md5_submit_avx2.o 00:01:39.668 mv obj/sha512_x2_avx.o.tmp obj/sha512_x2_avx.o 00:01:39.929 nasm -MD obj/aes_cbcs_dec_vaes_avx512.d -MT obj/aes_cbcs_dec_vaes_avx512.o -o obj/aes_cbcs_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_cbcs_dec_vaes_avx512.asm 00:01:39.929 nasm -MD obj/aes_docsis_dec_avx512.d -MT obj/aes_docsis_dec_avx512.o -o obj/aes_docsis_dec_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_dec_avx512.asm 00:01:39.929 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_md5_flush_avx2.o.tmp obj/mb_mgr_hmac_md5_flush_avx2.o 00:01:39.929 mv obj/mb_mgr_hmac_sha_224_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx2.o 00:01:39.929 mv obj/mb_mgr_hmac_md5_submit_avx2.o.tmp obj/mb_mgr_hmac_md5_submit_avx2.o 00:01:39.929 nasm -MD obj/aes_docsis_enc_avx512.d -MT obj/aes_docsis_enc_avx512.o -o obj/aes_docsis_enc_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_enc_avx512.asm 
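The step repeated throughout this stretch of the log is the same for every assembly kernel: nasm assembles the .asm source into obj/<name>.o, the object is re-linked with ld -r -z ibt -z shstk into a .o.tmp file, and mv renames it back over the original. The -z ibt -z shstk options tag the relocatable object as compatible with Intel CET (indirect branch tracking and shadow stack), and the .tmp-then-mv step appears to keep the object update atomic under a parallel build. A minimal sketch of that per-object sequence, using a placeholder kernel name foo_avx rather than any file from the log:

  # assemble one AVX kernel (flags copied from the commands above; foo_avx is a placeholder)
  nasm -MD obj/foo_avx.d -MT obj/foo_avx.o -o obj/foo_avx.o \
      -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ \
      -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx/foo_avx.asm
  # re-link the relocatable object with CET (IBT/SHSTK) markers, then rename atomically
  ld -r -z ibt -z shstk -o obj/foo_avx.o.tmp obj/foo_avx.o
  mv obj/foo_avx.o.tmp obj/foo_avx.o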
00:01:39.929 nasm -MD obj/aes_docsis_dec_vaes_avx512.d -MT obj/aes_docsis_dec_vaes_avx512.o -o obj/aes_docsis_dec_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_dec_vaes_avx512.asm 00:01:39.929 nasm -MD obj/aes_docsis_enc_vaes_avx512.d -MT obj/aes_docsis_enc_vaes_avx512.o -o obj/aes_docsis_enc_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/aes_docsis_enc_vaes_avx512.asm 00:01:39.929 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx2.o.tmp obj/mb_mgr_hmac_flush_avx2.o 00:01:39.929 nasm -MD obj/zuc_avx512.d -MT obj/zuc_avx512.o -o obj/zuc_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/zuc_avx512.asm 00:01:39.929 mv obj/mb_mgr_hmac_md5_flush_avx2.o.tmp obj/mb_mgr_hmac_md5_flush_avx2.o 00:01:39.929 nasm -MD obj/mb_mgr_aes_submit_avx512.d -MT obj/mb_mgr_aes_submit_avx512.o -o obj/mb_mgr_aes_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_submit_avx512.asm 00:01:39.929 nasm -MD obj/mb_mgr_aes_flush_avx512.d -MT obj/mb_mgr_aes_flush_avx512.o -o obj/mb_mgr_aes_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_flush_avx512.asm 00:01:39.929 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx2.o 00:01:39.929 mv obj/mb_mgr_hmac_flush_avx2.o.tmp obj/mb_mgr_hmac_flush_avx2.o 00:01:39.929 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx2.o.tmp obj/mb_mgr_hmac_submit_avx2.o 00:01:39.929 nasm -MD obj/mb_mgr_aes192_submit_avx512.d -MT obj/mb_mgr_aes192_submit_avx512.o -o obj/mb_mgr_aes192_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes192_submit_avx512.asm 00:01:39.929 nasm -MD obj/mb_mgr_aes192_flush_avx512.d -MT obj/mb_mgr_aes192_flush_avx512.o -o obj/mb_mgr_aes192_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes192_flush_avx512.asm 00:01:39.929 mv obj/mb_mgr_hmac_sha_384_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx2.o 00:01:39.929 nasm -MD obj/mb_mgr_aes256_submit_avx512.d -MT obj/mb_mgr_aes256_submit_avx512.o -o obj/mb_mgr_aes256_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_submit_avx512.asm 00:01:39.929 mv obj/mb_mgr_hmac_submit_avx2.o.tmp obj/mb_mgr_hmac_submit_avx2.o 00:01:39.929 nasm -MD obj/mb_mgr_aes256_flush_avx512.d -MT obj/mb_mgr_aes256_flush_avx512.o -o obj/mb_mgr_aes256_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_flush_avx512.asm 00:01:39.929 nasm -MD obj/mb_mgr_hmac_flush_avx512.d -MT obj/mb_mgr_hmac_flush_avx512.o -o obj/mb_mgr_hmac_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_flush_avx512.asm 00:01:39.929 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx2.o 00:01:39.929 nasm -MD obj/mb_mgr_hmac_submit_avx512.d -MT obj/mb_mgr_hmac_submit_avx512.o -o obj/mb_mgr_hmac_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_submit_avx512.asm 
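Whether the re-linked objects actually carry the CET markers can be checked afterwards with readelf; this is a verification sketch, not a command from the build itself:

  # inspect the GNU property note on one of the objects re-linked above
  readelf -n obj/mb_mgr_hmac_flush_avx2.o
  # a CET-enabled object shows a .note.gnu.property entry reporting
  # x86 feature: IBT, SHSTK (exact wording varies by binutils version)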
00:01:39.929 nasm -MD obj/mb_mgr_hmac_sha_224_flush_avx512.d -MT obj/mb_mgr_hmac_sha_224_flush_avx512.o -o obj/mb_mgr_hmac_sha_224_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_224_flush_avx512.asm 00:01:39.929 ld -r -z ibt -z shstk -o obj/sha512_x4_avx2.o.tmp obj/sha512_x4_avx2.o 00:01:39.929 ld -r -z ibt -z shstk -o obj/sha256_oct_avx2.o.tmp obj/sha256_oct_avx2.o 00:01:39.929 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx2.o 00:01:39.929 mv obj/mb_mgr_hmac_sha_256_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx2.o 00:01:39.929 nasm -MD obj/mb_mgr_hmac_sha_224_submit_avx512.d -MT obj/mb_mgr_hmac_sha_224_submit_avx512.o -o obj/mb_mgr_hmac_sha_224_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_224_submit_avx512.asm 00:01:39.930 mv obj/sha512_x4_avx2.o.tmp obj/sha512_x4_avx2.o 00:01:39.930 mv obj/sha256_oct_avx2.o.tmp obj/sha256_oct_avx2.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx2.o 00:01:39.930 mv obj/mb_mgr_hmac_sha_224_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx2.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx2.o 00:01:39.930 nasm -MD obj/mb_mgr_hmac_sha_256_flush_avx512.d -MT obj/mb_mgr_hmac_sha_256_flush_avx512.o -o obj/mb_mgr_hmac_sha_256_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_256_flush_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/sha1_x8_avx2.o.tmp obj/sha1_x8_avx2.o 00:01:39.930 mv obj/mb_mgr_hmac_sha_512_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx2.o 00:01:39.930 nasm -MD obj/mb_mgr_hmac_sha_256_submit_avx512.d -MT obj/mb_mgr_hmac_sha_256_submit_avx512.o -o obj/mb_mgr_hmac_sha_256_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_256_submit_avx512.asm 00:01:39.930 mv obj/mb_mgr_hmac_sha_256_flush_avx2.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx2.o 00:01:39.930 nasm -MD obj/mb_mgr_hmac_sha_384_flush_avx512.d -MT obj/mb_mgr_hmac_sha_384_flush_avx512.o -o obj/mb_mgr_hmac_sha_384_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_384_flush_avx512.asm 00:01:39.930 mv obj/sha1_x8_avx2.o.tmp obj/sha1_x8_avx2.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/pon_avx.o.tmp obj/pon_avx.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx2.o 00:01:39.930 nasm -MD obj/mb_mgr_hmac_sha_384_submit_avx512.d -MT obj/mb_mgr_hmac_sha_384_submit_avx512.o -o obj/mb_mgr_hmac_sha_384_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_384_submit_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_hmac_sha_512_flush_avx512.d -MT obj/mb_mgr_hmac_sha_512_flush_avx512.o -o obj/mb_mgr_hmac_sha_512_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_512_flush_avx512.asm 00:01:39.930 mv obj/pon_avx.o.tmp obj/pon_avx.o 00:01:39.930 mv obj/mb_mgr_hmac_sha_384_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx2.o 00:01:39.930 nasm -MD 
obj/mb_mgr_hmac_sha_512_submit_avx512.d -MT obj/mb_mgr_hmac_sha_512_submit_avx512.o -o obj/mb_mgr_hmac_sha_512_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_hmac_sha_512_submit_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_des_avx512.d -MT obj/mb_mgr_des_avx512.o -o obj/mb_mgr_des_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_des_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cmac_submit_flush_vaes_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.d -MT obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o -o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_zuc_submit_flush_avx512.d -MT obj/mb_mgr_zuc_submit_flush_avx512.o -o obj/mb_mgr_zuc_submit_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_zuc_submit_flush_avx512.asm 00:01:39.930 nasm -MD obj/mb_mgr_zuc_submit_flush_gfni_avx512.d -MT obj/mb_mgr_zuc_submit_flush_gfni_avx512.o -o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_zuc_submit_flush_gfni_avx512.asm 00:01:39.930 nasm -MD obj/chacha20_avx512.d -MT obj/chacha20_avx512.o -o obj/chacha20_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/chacha20_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/aes128_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbc_dec_by4_sse_no_aesni.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_submit_avx512.o.tmp obj/mb_mgr_aes_submit_avx512.o 00:01:39.930 nasm -MD obj/poly_avx512.d -MT obj/poly_avx512.o -o obj/poly_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/poly_avx512.asm 00:01:39.930 nasm -MD obj/poly_fma_avx512.d -MT obj/poly_fma_avx512.o -o obj/poly_fma_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/poly_fma_avx512.asm 
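Each nasm invocation also emits a .d dependency file (-MD, with -MT naming the target it belongs to) so include changes trigger the right rebuilds, and the SAFE_DATA, SAFE_PARAM and SAFE_LOOKUP defines select the hardened build options (sensitive-data clearing, parameter checks and constant-time lookups, as those options are documented for the intel-ipsec-mb multi-buffer library these sources appear to belong to). The same job-manager kernels are assembled once per ISA level, which is why names such as mb_mgr_hmac_sha_256_submit recur with sse, avx, avx2, avx512 and vaes/gfni suffixes. A quick way to see that per-ISA fan-out once the objects exist, as an illustrative check rather than part of the build:

  # list the ISA variants of one job-manager kernel produced by this build
  ls obj/mb_mgr_hmac_sha_256_submit_*.o
  # per the commands in this log: ..._avx.o, ..._avx2.o, ..._avx512.o
  # (SSE and no-aesni variants, if present, come from other parts of the build)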
00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx2.o 00:01:39.930 nasm -MD obj/ethernet_fcs_avx512.d -MT obj/ethernet_fcs_avx512.o -o obj/ethernet_fcs_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/ethernet_fcs_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_submit_avx512.o.tmp obj/mb_mgr_hmac_submit_avx512.o 00:01:39.930 mv obj/aes128_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbc_dec_by4_sse_no_aesni.o 00:01:39.930 mv obj/mb_mgr_aes_submit_avx512.o.tmp obj/mb_mgr_aes_submit_avx512.o 00:01:39.930 nasm -MD obj/crc16_x25_avx512.d -MT obj/crc16_x25_avx512.o -o obj/crc16_x25_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc16_x25_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/sha1_x16_avx512.o.tmp obj/sha1_x16_avx512.o 00:01:39.930 mv obj/mb_mgr_hmac_sha_512_submit_avx2.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx2.o 00:01:39.930 mv obj/mb_mgr_hmac_submit_avx512.o.tmp obj/mb_mgr_hmac_submit_avx512.o 00:01:39.930 nasm -MD obj/crc32_refl_by16_vclmul_avx512.d -MT obj/crc32_refl_by16_vclmul_avx512.o -o obj/crc32_refl_by16_vclmul_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_refl_by16_vclmul_avx512.asm 00:01:39.930 nasm -MD obj/crc32_by16_vclmul_avx512.d -MT obj/crc32_by16_vclmul_avx512.o -o obj/crc32_by16_vclmul_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_by16_vclmul_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/aes_cbcs_enc_vaes_avx512.o.tmp obj/aes_cbcs_enc_vaes_avx512.o 00:01:39.930 mv obj/sha1_x16_avx512.o.tmp obj/sha1_x16_avx512.o 00:01:39.930 nasm -MD obj/mb_mgr_aes_cbcs_1_9_submit_avx512.d -MT obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o -o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cbcs_1_9_submit_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/ethernet_fcs_avx512.o.tmp obj/ethernet_fcs_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_submit_avx512.o.tmp obj/mb_mgr_aes192_submit_avx512.o 00:01:39.930 mv obj/aes_cbcs_enc_vaes_avx512.o.tmp obj/aes_cbcs_enc_vaes_avx512.o 00:01:39.930 nasm -MD obj/mb_mgr_aes_cbcs_1_9_flush_avx512.d -MT obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o -o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/mb_mgr_aes_cbcs_1_9_flush_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_flush_avx512.o.tmp obj/mb_mgr_hmac_flush_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_submit_avx512.o.tmp obj/mb_mgr_aes256_submit_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/crc16_x25_avx512.o.tmp obj/crc16_x25_avx512.o 00:01:39.930 mv obj/ethernet_fcs_avx512.o.tmp obj/ethernet_fcs_avx512.o 00:01:39.930 mv obj/mb_mgr_aes192_submit_avx512.o.tmp obj/mb_mgr_aes192_submit_avx512.o 00:01:39.930 nasm -MD obj/crc32_sctp_avx512.d -MT obj/crc32_sctp_avx512.o -o obj/crc32_sctp_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_sctp_avx512.asm 00:01:39.930 mv obj/mb_mgr_hmac_flush_avx512.o.tmp obj/mb_mgr_hmac_flush_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_flush_avx512.o.tmp 
obj/mb_mgr_hmac_sha_512_flush_avx512.o 00:01:39.930 mv obj/mb_mgr_aes256_submit_avx512.o.tmp obj/mb_mgr_aes256_submit_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/crc32_refl_by16_vclmul_avx512.o.tmp obj/crc32_refl_by16_vclmul_avx512.o 00:01:39.930 mv obj/crc16_x25_avx512.o.tmp obj/crc16_x25_avx512.o 00:01:39.930 nasm -MD obj/crc32_lte_avx512.d -MT obj/crc32_lte_avx512.o -o obj/crc32_lte_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_lte_avx512.asm 00:01:39.930 mv obj/mb_mgr_hmac_sha_512_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_512_flush_avx512.o 00:01:39.930 mv obj/crc32_refl_by16_vclmul_avx512.o.tmp obj/crc32_refl_by16_vclmul_avx512.o 00:01:39.930 nasm -MD obj/crc32_fp_avx512.d -MT obj/crc32_fp_avx512.o -o obj/crc32_fp_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_fp_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/crc32_by16_vclmul_avx512.o.tmp obj/crc32_by16_vclmul_avx512.o 00:01:39.930 mv obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o.tmp obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o 00:01:39.930 nasm -MD obj/crc32_iuup_avx512.d -MT obj/crc32_iuup_avx512.o -o obj/crc32_iuup_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_iuup_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/crc32_sctp_avx512.o.tmp obj/crc32_sctp_avx512.o 00:01:39.930 mv obj/mb_mgr_hmac_sha_384_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_384_submit_avx512.o 00:01:39.930 mv obj/crc32_by16_vclmul_avx512.o.tmp obj/crc32_by16_vclmul_avx512.o 00:01:39.930 nasm -MD obj/crc32_wimax_avx512.d -MT obj/crc32_wimax_avx512.o -o obj/crc32_wimax_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/crc32_wimax_avx512.asm 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx512.o 00:01:39.930 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_384_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx512.o 00:01:39.931 mv obj/crc32_sctp_avx512.o.tmp obj/crc32_sctp_avx512.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/crc32_lte_avx512.o.tmp obj/crc32_lte_avx512.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/crc32_fp_avx512.o.tmp obj/crc32_fp_avx512.o 00:01:39.931 nasm -MD obj/gcm128_vaes_avx512.d -MT obj/gcm128_vaes_avx512.o -o obj/gcm128_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm128_vaes_avx512.asm 00:01:39.931 mv obj/mb_mgr_hmac_sha_256_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_256_flush_avx512.o 00:01:39.931 mv obj/mb_mgr_hmac_sha_384_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_384_flush_avx512.o 00:01:39.931 mv obj/crc32_lte_avx512.o.tmp obj/crc32_lte_avx512.o 00:01:39.931 nasm -MD obj/gcm192_vaes_avx512.d -MT obj/gcm192_vaes_avx512.o -o obj/gcm192_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm192_vaes_avx512.asm 00:01:39.931 mv obj/crc32_fp_avx512.o.tmp obj/crc32_fp_avx512.o 00:01:39.931 nasm -MD obj/gcm256_vaes_avx512.d -MT obj/gcm256_vaes_avx512.o -o obj/gcm256_vaes_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA 
-DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm256_vaes_avx512.asm 00:01:39.931 ld -r -z ibt -z shstk -o obj/crc32_iuup_avx512.o.tmp obj/crc32_iuup_avx512.o 00:01:39.931 nasm -MD obj/gcm128_avx512.d -MT obj/gcm128_avx512.o -o obj/gcm128_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm128_avx512.asm 00:01:39.931 ld -r -z ibt -z shstk -o obj/crc32_wimax_avx512.o.tmp obj/crc32_wimax_avx512.o 00:01:39.931 nasm -MD obj/gcm192_avx512.d -MT obj/gcm192_avx512.o -o obj/gcm192_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm192_avx512.asm 00:01:39.931 mv obj/crc32_iuup_avx512.o.tmp obj/crc32_iuup_avx512.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/md5_x8x2_avx2.o.tmp obj/md5_x8x2_avx2.o 00:01:39.931 nasm -MD obj/gcm256_avx512.d -MT obj/gcm256_avx512.o -o obj/gcm256_avx512.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP avx512/gcm256_avx512.asm 00:01:39.931 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/mb_mgr_avx.c -o obj/mb_mgr_avx.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_flush_avx512.o.tmp obj/mb_mgr_aes_flush_avx512.o 00:01:39.931 mv obj/crc32_wimax_avx512.o.tmp obj/crc32_wimax_avx512.o 00:01:39.931 mv obj/md5_x8x2_avx2.o.tmp obj/md5_x8x2_avx2.o 00:01:39.931 gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/mb_mgr_avx2.c -o obj/mb_mgr_avx2.o 00:01:39.931 gcc -MMD -march=broadwell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx512/mb_mgr_avx512.c -o obj/mb_mgr_avx512.o 00:01:39.931 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/mb_mgr_sse.c -o obj/mb_mgr_sse.o 00:01:39.931 mv obj/mb_mgr_aes_flush_avx512.o.tmp obj/mb_mgr_aes_flush_avx512.o 00:01:39.931 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/mb_mgr_sse_no_aesni.c -o obj/mb_mgr_sse_no_aesni.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/alloc.c -o obj/alloc.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/aes_xcbc_expand_key.c -o obj/aes_xcbc_expand_key.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/sha512_x8_avx512.o.tmp obj/sha512_x8_avx512.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/md5_one_block.c -o obj/md5_one_block.o 00:01:39.931 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/sha_sse.c -o obj/sha_sse.o 00:01:39.931 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/sha_avx.c -o obj/sha_avx.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/des_key.c -o obj/des_key.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_512_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx512.o 00:01:39.931 mv obj/sha512_x8_avx512.o.tmp obj/sha512_x8_avx512.o 00:01:39.931 mv obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/des_basic.c -o obj/des_basic.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/version.c -o obj/version.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/cpu_feature.c -o obj/cpu_feature.o 00:01:39.931 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/aesni_emu.c -o obj/aesni_emu.o 00:01:39.931 mv obj/mb_mgr_hmac_sha_512_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_512_submit_avx512.o 00:01:39.931 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/kasumi_avx.c -o obj/kasumi_avx.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_256_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx512.o 00:01:39.931 ld -r -z ibt -z shstk -o obj/mb_mgr_aes192_flush_avx512.o.tmp obj/mb_mgr_aes192_flush_avx512.o 00:01:39.931 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/kasumi_iv.c -o obj/kasumi_iv.o 00:01:39.932 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/kasumi_sse.c -o obj/kasumi_sse.o 00:01:39.932 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/zuc_sse_top.c -o obj/zuc_sse_top.o 00:01:39.932 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/zuc_sse_no_aesni_top.c -o obj/zuc_sse_no_aesni_top.o 00:01:39.932 mv obj/mb_mgr_hmac_sha_256_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_256_submit_avx512.o 00:01:39.932 mv obj/mb_mgr_aes192_flush_avx512.o.tmp obj/mb_mgr_aes192_flush_avx512.o 00:01:39.932 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/zuc_avx_top.c -o obj/zuc_avx_top.o 00:01:39.932 gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/zuc_avx2_top.c -o obj/zuc_avx2_top.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/aes192_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes192_cbc_dec_by4_sse_no_aesni.o 00:01:39.932 gcc -MMD -march=broadwell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx512/zuc_avx512_top.c -o obj/zuc_avx512_top.o 00:01:39.932 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/zuc_iv.c -o obj/zuc_iv.o 00:01:39.932 gcc -MMD -march=nehalem -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC sse/snow3g_sse.c -o obj/snow3g_sse.o 00:01:39.932 gcc -MMD -march=nehalem -mno-pclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC -O1 no-aesni/snow3g_sse_no_aesni.c -o obj/snow3g_sse_no_aesni.o 00:01:39.932 gcc -MMD -march=sandybridge -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx/snow3g_avx.c -o obj/snow3g_avx.o 00:01:39.932 mv obj/aes192_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes192_cbc_dec_by4_sse_no_aesni.o 00:01:39.932 gcc -MMD -march=haswell -maes -mpclmul -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC avx2/snow3g_avx2.c -o obj/snow3g_avx2.o 00:01:39.932 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/snow3g_tables.c -o obj/snow3g_tables.o 00:01:39.932 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/snow3g_iv.c -o obj/snow3g_iv.o 00:01:39.932 nasm -MD obj/snow_v_sse.d -MT obj/snow_v_sse.o -o obj/snow_v_sse.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP sse/snow_v_sse.asm 00:01:39.932 nasm -MD obj/snow_v_sse_noaesni.d -MT obj/snow_v_sse_noaesni.o -o obj/snow_v_sse_noaesni.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ -I./ -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP no-aesni/snow_v_sse_noaesni.asm 00:01:39.932 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/mb_mgr_auto.c -o obj/mb_mgr_auto.o 00:01:39.932 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . -I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/error.c -o obj/error.o 00:01:39.932 gcc -MMD -msse4.2 -c -DLINUX -DNO_COMPAT_IMB_API_053 -fPIC -I include -I . 
-I no-aesni -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -fstack-protector -D_FORTIFY_SOURCE=2 -DSAFE_DATA -DSAFE_PARAM -DSAFE_LOOKUP -O3 -fPIC x86_64/gcm.c -o obj/gcm.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/poly_avx512.o.tmp obj/poly_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/sha256_x16_avx512.o.tmp obj/sha256_x16_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o 00:01:39.932 mv obj/poly_avx512.o.tmp obj/poly_avx512.o 00:01:39.932 mv obj/sha256_x16_avx512.o.tmp obj/sha256_x16_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_des_avx512.o.tmp obj/mb_mgr_des_avx512.o 00:01:39.932 mv obj/mb_mgr_zuc_submit_flush_gfni_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_gfni_avx512.o 00:01:39.932 mv obj/mb_mgr_zuc_submit_flush_avx512.o.tmp obj/mb_mgr_zuc_submit_flush_avx512.o 00:01:39.932 mv obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_flush_avx512.o.tmp obj/mb_mgr_aes256_flush_avx512.o 00:01:39.932 mv obj/mb_mgr_des_avx512.o.tmp obj/mb_mgr_des_avx512.o 00:01:39.932 mv obj/mb_mgr_aes256_flush_avx512.o.tmp obj/mb_mgr_aes256_flush_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/poly_fma_avx512.o.tmp obj/poly_fma_avx512.o 00:01:39.932 mv obj/poly_fma_avx512.o.tmp obj/poly_fma_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o 00:01:39.932 mv obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o.tmp obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o 00:01:39.932 mv obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o 00:01:39.932 mv obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/snow_v_sse.o.tmp obj/snow_v_sse.o 00:01:39.932 mv obj/snow_v_sse.o.tmp obj/snow_v_sse.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx512.o 00:01:39.932 mv obj/mb_mgr_hmac_sha_224_flush_avx512.o.tmp obj/mb_mgr_hmac_sha_224_flush_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o 00:01:39.932 mv obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/mb_mgr_hmac_sha_224_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx512.o 00:01:39.932 mv obj/mb_mgr_hmac_sha_224_submit_avx512.o.tmp obj/mb_mgr_hmac_sha_224_submit_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o 
obj/aes_cbcs_dec_vaes_avx512.o.tmp obj/aes_cbcs_dec_vaes_avx512.o 00:01:39.932 ld -r -z ibt -z shstk -o obj/snow_v_sse_noaesni.o.tmp obj/snow_v_sse_noaesni.o 00:01:39.932 mv obj/aes_cbcs_dec_vaes_avx512.o.tmp obj/aes_cbcs_dec_vaes_avx512.o 00:01:39.932 mv obj/snow_v_sse_noaesni.o.tmp obj/snow_v_sse_noaesni.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/zuc_common.o.tmp obj/zuc_common.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o 00:01:40.191 mv obj/zuc_common.o.tmp obj/zuc_common.o 00:01:40.191 mv obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o.tmp obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/aes256_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes256_cbc_dec_by4_sse_no_aesni.o 00:01:40.191 mv obj/aes256_cbc_dec_by4_sse_no_aesni.o.tmp obj/aes256_cbc_dec_by4_sse_no_aesni.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/aes_cbc_enc_vaes_avx512.o.tmp obj/aes_cbc_enc_vaes_avx512.o 00:01:40.191 mv obj/aes_cbc_enc_vaes_avx512.o.tmp obj/aes_cbc_enc_vaes_avx512.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/aes_docsis_dec_avx512.o.tmp obj/aes_docsis_dec_avx512.o 00:01:40.191 mv obj/aes_docsis_dec_avx512.o.tmp obj/aes_docsis_dec_avx512.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/mb_mgr_zuc_submit_flush_avx2.o.tmp obj/mb_mgr_zuc_submit_flush_avx2.o 00:01:40.191 mv obj/mb_mgr_zuc_submit_flush_avx2.o.tmp obj/mb_mgr_zuc_submit_flush_avx2.o 00:01:40.191 ld -r -z ibt -z shstk -o obj/zuc_sse_gfni.o.tmp obj/zuc_sse_gfni.o 00:01:40.450 mv obj/zuc_sse_gfni.o.tmp obj/zuc_sse_gfni.o 00:01:40.450 ld -r -z ibt -z shstk -o obj/aes_docsis_enc_avx512.o.tmp obj/aes_docsis_enc_avx512.o 00:01:40.450 mv obj/aes_docsis_enc_avx512.o.tmp obj/aes_docsis_enc_avx512.o 00:01:40.450 ld -r -z ibt -z shstk -o obj/zuc_sse.o.tmp obj/zuc_sse.o 00:01:40.450 mv obj/zuc_sse.o.tmp obj/zuc_sse.o 00:01:40.708 ld -r -z ibt -z shstk -o obj/aes_docsis_enc_vaes_avx512.o.tmp obj/aes_docsis_enc_vaes_avx512.o 00:01:40.708 mv obj/aes_docsis_enc_vaes_avx512.o.tmp obj/aes_docsis_enc_vaes_avx512.o 00:01:40.708 ld -r -z ibt -z shstk -o obj/chacha20_avx.o.tmp obj/chacha20_avx.o 00:01:40.708 mv obj/chacha20_avx.o.tmp obj/chacha20_avx.o 00:01:40.966 ld -r -z ibt -z shstk -o obj/zuc_avx.o.tmp obj/zuc_avx.o 00:01:40.966 mv obj/zuc_avx.o.tmp obj/zuc_avx.o 00:01:40.966 ld -r -z ibt -z shstk -o obj/pon_sse_no_aesni.o.tmp obj/pon_sse_no_aesni.o 00:01:40.966 mv obj/pon_sse_no_aesni.o.tmp obj/pon_sse_no_aesni.o 00:01:40.966 ld -r -z ibt -z shstk -o obj/aes_cbc_dec_vaes_avx512.o.tmp obj/aes_cbc_dec_vaes_avx512.o 00:01:40.966 mv obj/aes_cbc_dec_vaes_avx512.o.tmp obj/aes_cbc_dec_vaes_avx512.o 00:01:40.966 ld -r -z ibt -z shstk -o obj/chacha20_avx2.o.tmp obj/chacha20_avx2.o 00:01:40.966 mv obj/chacha20_avx2.o.tmp obj/chacha20_avx2.o 00:01:41.224 ld -r -z ibt -z shstk -o obj/zuc_sse_no_aesni.o.tmp obj/zuc_sse_no_aesni.o 00:01:41.224 mv obj/zuc_sse_no_aesni.o.tmp obj/zuc_sse_no_aesni.o 00:01:41.483 ld -r -z ibt -z shstk -o obj/gcm256_sse.o.tmp obj/gcm256_sse.o 00:01:41.483 mv obj/gcm256_sse.o.tmp obj/gcm256_sse.o 00:01:41.483 ld -r -z ibt -z shstk -o obj/gcm128_avx_gen2.o.tmp obj/gcm128_avx_gen2.o 00:01:41.483 mv obj/gcm128_avx_gen2.o.tmp obj/gcm128_avx_gen2.o 00:01:41.483 ld -r -z ibt -z shstk -o obj/gcm192_sse.o.tmp obj/gcm192_sse.o 00:01:41.483 mv obj/gcm192_sse.o.tmp obj/gcm192_sse.o 00:01:41.483 ld -r -z ibt -z shstk -o obj/aes128_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes128_cntr_ccm_by8_sse_no_aesni.o 00:01:41.483 mv 
obj/aes128_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes128_cntr_ccm_by8_sse_no_aesni.o 00:01:41.483 ld -r -z ibt -z shstk -o obj/gcm128_sse.o.tmp obj/gcm128_sse.o 00:01:41.483 ld -r -z ibt -z shstk -o obj/gcm256_avx_gen2.o.tmp obj/gcm256_avx_gen2.o 00:01:41.483 mv obj/gcm128_sse.o.tmp obj/gcm128_sse.o 00:01:41.483 mv obj/gcm256_avx_gen2.o.tmp obj/gcm256_avx_gen2.o 00:01:41.742 ld -r -z ibt -z shstk -o obj/gcm192_avx_gen2.o.tmp obj/gcm192_avx_gen2.o 00:01:41.742 mv obj/gcm192_avx_gen2.o.tmp obj/gcm192_avx_gen2.o 00:01:42.002 ld -r -z ibt -z shstk -o obj/aes_docsis_dec_vaes_avx512.o.tmp obj/aes_docsis_dec_vaes_avx512.o 00:01:42.002 mv obj/aes_docsis_dec_vaes_avx512.o.tmp obj/aes_docsis_dec_vaes_avx512.o 00:01:42.002 ld -r -z ibt -z shstk -o obj/aes256_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes256_cntr_ccm_by8_sse_no_aesni.o 00:01:42.002 mv obj/aes256_cntr_ccm_by8_sse_no_aesni.o.tmp obj/aes256_cntr_ccm_by8_sse_no_aesni.o 00:01:42.002 ld -r -z ibt -z shstk -o obj/gcm192_avx512.o.tmp obj/gcm192_avx512.o 00:01:42.002 mv obj/gcm192_avx512.o.tmp obj/gcm192_avx512.o 00:01:42.261 ld -r -z ibt -z shstk -o obj/gcm128_avx512.o.tmp obj/gcm128_avx512.o 00:01:42.261 mv obj/gcm128_avx512.o.tmp obj/gcm128_avx512.o 00:01:42.261 ld -r -z ibt -z shstk -o obj/cntr_ccm_vaes_avx512.o.tmp obj/cntr_ccm_vaes_avx512.o 00:01:42.261 mv obj/cntr_ccm_vaes_avx512.o.tmp obj/cntr_ccm_vaes_avx512.o 00:01:42.261 ld -r -z ibt -z shstk -o obj/chacha20_avx512.o.tmp obj/chacha20_avx512.o 00:01:42.261 mv obj/chacha20_avx512.o.tmp obj/chacha20_avx512.o 00:01:42.519 ld -r -z ibt -z shstk -o obj/gcm256_avx512.o.tmp obj/gcm256_avx512.o 00:01:42.519 mv obj/gcm256_avx512.o.tmp obj/gcm256_avx512.o 00:01:42.778 ld -r -z ibt -z shstk -o obj/gcm128_avx_gen4.o.tmp obj/gcm128_avx_gen4.o 00:01:42.778 mv obj/gcm128_avx_gen4.o.tmp obj/gcm128_avx_gen4.o 00:01:42.778 ld -r -z ibt -z shstk -o obj/zuc_avx512.o.tmp obj/zuc_avx512.o 00:01:42.778 mv obj/zuc_avx512.o.tmp obj/zuc_avx512.o 00:01:43.037 ld -r -z ibt -z shstk -o obj/aes_ecb_by4_sse_no_aesni.o.tmp obj/aes_ecb_by4_sse_no_aesni.o 00:01:43.037 mv obj/aes_ecb_by4_sse_no_aesni.o.tmp obj/aes_ecb_by4_sse_no_aesni.o 00:01:43.296 ld -r -z ibt -z shstk -o obj/gcm192_avx_gen4.o.tmp obj/gcm192_avx_gen4.o 00:01:43.296 mv obj/gcm192_avx_gen4.o.tmp obj/gcm192_avx_gen4.o 00:01:43.863 ld -r -z ibt -z shstk -o obj/gcm256_avx_gen4.o.tmp obj/gcm256_avx_gen4.o 00:01:43.863 mv obj/gcm256_avx_gen4.o.tmp obj/gcm256_avx_gen4.o 00:01:43.863 ld -r -z ibt -z shstk -o obj/aes128_cntr_by8_sse_no_aesni.o.tmp obj/aes128_cntr_by8_sse_no_aesni.o 00:01:43.863 mv obj/aes128_cntr_by8_sse_no_aesni.o.tmp obj/aes128_cntr_by8_sse_no_aesni.o 00:01:44.430 ld -r -z ibt -z shstk -o obj/des_x16_avx512.o.tmp obj/des_x16_avx512.o 00:01:44.430 mv obj/des_x16_avx512.o.tmp obj/des_x16_avx512.o 00:01:44.688 ld -r -z ibt -z shstk -o obj/aes192_cntr_by8_sse_no_aesni.o.tmp obj/aes192_cntr_by8_sse_no_aesni.o 00:01:44.688 mv obj/aes192_cntr_by8_sse_no_aesni.o.tmp obj/aes192_cntr_by8_sse_no_aesni.o 00:01:45.318 ld -r -z ibt -z shstk -o obj/aes256_cntr_by8_sse_no_aesni.o.tmp obj/aes256_cntr_by8_sse_no_aesni.o 00:01:45.318 mv obj/aes256_cntr_by8_sse_no_aesni.o.tmp obj/aes256_cntr_by8_sse_no_aesni.o 00:01:46.253 ld -r -z ibt -z shstk -o obj/chacha20_sse.o.tmp obj/chacha20_sse.o 00:01:46.253 mv obj/chacha20_sse.o.tmp obj/chacha20_sse.o 00:01:47.187 ld -r -z ibt -z shstk -o obj/zuc_avx2.o.tmp obj/zuc_avx2.o 00:01:47.187 mv obj/zuc_avx2.o.tmp obj/zuc_avx2.o 00:01:50.469 ld -r -z ibt -z shstk -o obj/gcm128_vaes_avx512.o.tmp obj/gcm128_vaes_avx512.o 00:01:50.469 
mv obj/gcm128_vaes_avx512.o.tmp obj/gcm128_vaes_avx512.o 00:01:51.036 ld -r -z ibt -z shstk -o obj/gcm192_vaes_avx512.o.tmp obj/gcm192_vaes_avx512.o 00:01:51.036 mv obj/gcm192_vaes_avx512.o.tmp obj/gcm192_vaes_avx512.o 00:01:51.970 ld -r -z ibt -z shstk -o obj/gcm256_vaes_avx512.o.tmp obj/gcm256_vaes_avx512.o 00:01:51.970 mv obj/gcm256_vaes_avx512.o.tmp obj/gcm256_vaes_avx512.o 00:02:00.087 ld -r -z ibt -z shstk -o obj/cntr_vaes_avx512.o.tmp obj/cntr_vaes_avx512.o 00:02:00.087 mv obj/cntr_vaes_avx512.o.tmp obj/cntr_vaes_avx512.o 00:02:46.758 ld -r -z ibt -z shstk -o obj/gcm192_sse_no_aesni.o.tmp obj/gcm192_sse_no_aesni.o 00:02:46.759 mv obj/gcm192_sse_no_aesni.o.tmp obj/gcm192_sse_no_aesni.o 00:02:54.908 ld -r -z ibt -z shstk -o obj/gcm256_sse_no_aesni.o.tmp obj/gcm256_sse_no_aesni.o 00:02:54.908 mv obj/gcm256_sse_no_aesni.o.tmp obj/gcm256_sse_no_aesni.o 00:03:00.185 ld -r -z ibt -z shstk -o obj/gcm128_sse_no_aesni.o.tmp obj/gcm128_sse_no_aesni.o 00:03:00.185 mv obj/gcm128_sse_no_aesni.o.tmp obj/gcm128_sse_no_aesni.o 00:03:00.186 gcc -shared -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -Wl,-soname,libIPSec_MB.so.1 -o libIPSec_MB.so.1.0.0 obj/aes_keyexp_128.o obj/aes_keyexp_192.o obj/aes_keyexp_256.o obj/aes_cmac_subkey_gen.o obj/save_xmms.o obj/clear_regs_mem_fns.o obj/const.o obj/aes128_ecbenc_x3.o obj/zuc_common.o obj/wireless_common.o obj/constant_lookup.o obj/crc32_refl_const.o obj/crc32_const.o obj/poly1305.o obj/chacha20_poly1305.o obj/aes128_cbc_dec_by4_sse_no_aesni.o obj/aes192_cbc_dec_by4_sse_no_aesni.o obj/aes256_cbc_dec_by4_sse_no_aesni.o obj/aes_cbc_enc_128_x4_no_aesni.o obj/aes_cbc_enc_192_x4_no_aesni.o obj/aes_cbc_enc_256_x4_no_aesni.o obj/aes128_cntr_by8_sse_no_aesni.o obj/aes192_cntr_by8_sse_no_aesni.o obj/aes256_cntr_by8_sse_no_aesni.o obj/aes_ecb_by4_sse_no_aesni.o obj/aes128_cntr_ccm_by8_sse_no_aesni.o obj/aes256_cntr_ccm_by8_sse_no_aesni.o obj/pon_sse_no_aesni.o obj/zuc_sse_no_aesni.o obj/aes_cfb_sse_no_aesni.o obj/aes128_cbc_mac_x4_no_aesni.o obj/aes256_cbc_mac_x4_no_aesni.o obj/aes_xcbc_mac_128_x4_no_aesni.o obj/mb_mgr_aes_flush_sse_no_aesni.o obj/mb_mgr_aes_submit_sse_no_aesni.o obj/mb_mgr_aes192_flush_sse_no_aesni.o obj/mb_mgr_aes192_submit_sse_no_aesni.o obj/mb_mgr_aes256_flush_sse_no_aesni.o obj/mb_mgr_aes256_submit_sse_no_aesni.o obj/mb_mgr_aes_cmac_submit_flush_sse_no_aesni.o obj/mb_mgr_aes256_cmac_submit_flush_sse_no_aesni.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_no_aesni.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_no_aesni.o obj/mb_mgr_aes_xcbc_flush_sse_no_aesni.o obj/mb_mgr_aes_xcbc_submit_sse_no_aesni.o obj/mb_mgr_zuc_submit_flush_sse_no_aesni.o obj/ethernet_fcs_sse_no_aesni.o obj/crc16_x25_sse_no_aesni.o obj/aes_cbcs_1_9_enc_128_x4_no_aesni.o obj/aes128_cbcs_1_9_dec_by4_sse_no_aesni.o obj/mb_mgr_aes128_cbcs_1_9_submit_sse.o obj/mb_mgr_aes128_cbcs_1_9_flush_sse.o obj/mb_mgr_aes128_cbcs_1_9_submit_sse_no_aesni.o obj/mb_mgr_aes128_cbcs_1_9_flush_sse_no_aesni.o obj/crc32_refl_by8_sse_no_aesni.o obj/crc32_by8_sse_no_aesni.o obj/crc32_sctp_sse_no_aesni.o obj/crc32_lte_sse_no_aesni.o obj/crc32_fp_sse_no_aesni.o obj/crc32_iuup_sse_no_aesni.o obj/crc32_wimax_sse_no_aesni.o obj/gcm128_sse_no_aesni.o obj/gcm192_sse_no_aesni.o obj/gcm256_sse_no_aesni.o obj/aes128_cbc_dec_by4_sse.o obj/aes128_cbc_dec_by8_sse.o obj/aes192_cbc_dec_by4_sse.o obj/aes192_cbc_dec_by8_sse.o obj/aes256_cbc_dec_by4_sse.o obj/aes256_cbc_dec_by8_sse.o obj/aes_cbc_enc_128_x4.o obj/aes_cbc_enc_192_x4.o obj/aes_cbc_enc_256_x4.o 
obj/aes_cbc_enc_128_x8_sse.o obj/aes_cbc_enc_192_x8_sse.o obj/aes_cbc_enc_256_x8_sse.o obj/pon_sse.o obj/aes128_cntr_by8_sse.o obj/aes192_cntr_by8_sse.o obj/aes256_cntr_by8_sse.o obj/aes_ecb_by4_sse.o obj/aes128_cntr_ccm_by8_sse.o obj/aes256_cntr_ccm_by8_sse.o obj/aes_cfb_sse.o obj/aes128_cbc_mac_x4.o obj/aes256_cbc_mac_x4.o obj/aes128_cbc_mac_x8_sse.o obj/aes256_cbc_mac_x8_sse.o obj/aes_xcbc_mac_128_x4.o obj/md5_x4x2_sse.o obj/sha1_mult_sse.o obj/sha1_one_block_sse.o obj/sha224_one_block_sse.o obj/sha256_one_block_sse.o obj/sha384_one_block_sse.o obj/sha512_one_block_sse.o obj/sha512_x2_sse.o obj/sha_256_mult_sse.o obj/sha1_ni_x2_sse.o obj/sha256_ni_x2_sse.o obj/zuc_sse.o obj/zuc_sse_gfni.o obj/mb_mgr_aes_flush_sse.o obj/mb_mgr_aes_submit_sse.o obj/mb_mgr_aes192_flush_sse.o obj/mb_mgr_aes192_submit_sse.o obj/mb_mgr_aes256_flush_sse.o obj/mb_mgr_aes256_submit_sse.o obj/mb_mgr_aes_flush_sse_x8.o obj/mb_mgr_aes_submit_sse_x8.o obj/mb_mgr_aes192_flush_sse_x8.o obj/mb_mgr_aes192_submit_sse_x8.o obj/mb_mgr_aes256_flush_sse_x8.o obj/mb_mgr_aes256_submit_sse_x8.o obj/mb_mgr_aes_cmac_submit_flush_sse.o obj/mb_mgr_aes256_cmac_submit_flush_sse.o obj/mb_mgr_aes_cmac_submit_flush_sse_x8.o obj/mb_mgr_aes256_cmac_submit_flush_sse_x8.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse.o obj/mb_mgr_aes_ccm_auth_submit_flush_sse_x8.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse.o obj/mb_mgr_aes256_ccm_auth_submit_flush_sse_x8.o obj/mb_mgr_aes_xcbc_flush_sse.o obj/mb_mgr_aes_xcbc_submit_sse.o obj/mb_mgr_hmac_md5_flush_sse.o obj/mb_mgr_hmac_md5_submit_sse.o obj/mb_mgr_hmac_flush_sse.o obj/mb_mgr_hmac_submit_sse.o obj/mb_mgr_hmac_sha_224_flush_sse.o obj/mb_mgr_hmac_sha_224_submit_sse.o obj/mb_mgr_hmac_sha_256_flush_sse.o obj/mb_mgr_hmac_sha_256_submit_sse.o obj/mb_mgr_hmac_sha_384_flush_sse.o obj/mb_mgr_hmac_sha_384_submit_sse.o obj/mb_mgr_hmac_sha_512_flush_sse.o obj/mb_mgr_hmac_sha_512_submit_sse.o obj/mb_mgr_hmac_flush_ni_sse.o obj/mb_mgr_hmac_submit_ni_sse.o obj/mb_mgr_hmac_sha_224_flush_ni_sse.o obj/mb_mgr_hmac_sha_224_submit_ni_sse.o obj/mb_mgr_hmac_sha_256_flush_ni_sse.o obj/mb_mgr_hmac_sha_256_submit_ni_sse.o obj/mb_mgr_zuc_submit_flush_sse.o obj/mb_mgr_zuc_submit_flush_gfni_sse.o obj/ethernet_fcs_sse.o obj/crc16_x25_sse.o obj/crc32_sctp_sse.o obj/aes_cbcs_1_9_enc_128_x4.o obj/aes128_cbcs_1_9_dec_by4_sse.o obj/crc32_refl_by8_sse.o obj/crc32_by8_sse.o obj/crc32_lte_sse.o obj/crc32_fp_sse.o obj/crc32_iuup_sse.o obj/crc32_wimax_sse.o obj/chacha20_sse.o obj/memcpy_sse.o obj/gcm128_sse.o obj/gcm192_sse.o obj/gcm256_sse.o obj/aes_cbc_enc_128_x8.o obj/aes_cbc_enc_192_x8.o obj/aes_cbc_enc_256_x8.o obj/aes128_cbc_dec_by8_avx.o obj/aes192_cbc_dec_by8_avx.o obj/aes256_cbc_dec_by8_avx.o obj/pon_avx.o obj/aes128_cntr_by8_avx.o obj/aes192_cntr_by8_avx.o obj/aes256_cntr_by8_avx.o obj/aes128_cntr_ccm_by8_avx.o obj/aes256_cntr_ccm_by8_avx.o obj/aes_ecb_by4_avx.o obj/aes_cfb_avx.o obj/aes128_cbc_mac_x8.o obj/aes256_cbc_mac_x8.o obj/aes_xcbc_mac_128_x8.o obj/md5_x4x2_avx.o obj/sha1_mult_avx.o obj/sha1_one_block_avx.o obj/sha224_one_block_avx.o obj/sha256_one_block_avx.o obj/sha_256_mult_avx.o obj/sha384_one_block_avx.o obj/sha512_one_block_avx.o obj/sha512_x2_avx.o obj/zuc_avx.o obj/mb_mgr_aes_flush_avx.o obj/mb_mgr_aes_submit_avx.o obj/mb_mgr_aes192_flush_avx.o obj/mb_mgr_aes192_submit_avx.o obj/mb_mgr_aes256_flush_avx.o obj/mb_mgr_aes256_submit_avx.o obj/mb_mgr_aes_cmac_submit_flush_avx.o obj/mb_mgr_aes256_cmac_submit_flush_avx.o obj/mb_mgr_aes_ccm_auth_submit_flush_avx.o obj/mb_mgr_aes256_ccm_auth_submit_flush_avx.o 
obj/mb_mgr_aes_xcbc_flush_avx.o obj/mb_mgr_aes_xcbc_submit_avx.o obj/mb_mgr_hmac_md5_flush_avx.o obj/mb_mgr_hmac_md5_submit_avx.o obj/mb_mgr_hmac_flush_avx.o obj/mb_mgr_hmac_submit_avx.o obj/mb_mgr_hmac_sha_224_flush_avx.o obj/mb_mgr_hmac_sha_224_submit_avx.o obj/mb_mgr_hmac_sha_256_flush_avx.o obj/mb_mgr_hmac_sha_256_submit_avx.o obj/mb_mgr_hmac_sha_384_flush_avx.o obj/mb_mgr_hmac_sha_384_submit_avx.o obj/mb_mgr_hmac_sha_512_flush_avx.o obj/mb_mgr_hmac_sha_512_submit_avx.o obj/mb_mgr_zuc_submit_flush_avx.o obj/ethernet_fcs_avx.o obj/crc16_x25_avx.o obj/aes_cbcs_1_9_enc_128_x8.o obj/aes128_cbcs_1_9_dec_by8_avx.o obj/mb_mgr_aes128_cbcs_1_9_submit_avx.o obj/mb_mgr_aes128_cbcs_1_9_flush_avx.o obj/crc32_refl_by8_avx.o obj/crc32_by8_avx.o obj/crc32_sctp_avx.o obj/crc32_lte_avx.o obj/crc32_fp_avx.o obj/crc32_iuup_avx.o obj/crc32_wimax_avx.o obj/chacha20_avx.o obj/memcpy_avx.o obj/gcm128_avx_gen2.o obj/gcm192_avx_gen2.o obj/gcm256_avx_gen2.o obj/md5_x8x2_avx2.o obj/sha1_x8_avx2.o obj/sha256_oct_avx2.o obj/sha512_x4_avx2.o obj/zuc_avx2.o obj/mb_mgr_hmac_md5_flush_avx2.o obj/mb_mgr_hmac_md5_submit_avx2.o obj/mb_mgr_hmac_flush_avx2.o obj/mb_mgr_hmac_submit_avx2.o obj/mb_mgr_hmac_sha_224_flush_avx2.o obj/mb_mgr_hmac_sha_224_submit_avx2.o obj/mb_mgr_hmac_sha_256_flush_avx2.o obj/mb_mgr_hmac_sha_256_submit_avx2.o obj/mb_mgr_hmac_sha_384_flush_avx2.o obj/mb_mgr_hmac_sha_384_submit_avx2.o obj/mb_mgr_hmac_sha_512_flush_avx2.o obj/mb_mgr_hmac_sha_512_submit_avx2.o obj/mb_mgr_zuc_submit_flush_avx2.o obj/chacha20_avx2.o obj/gcm128_avx_gen4.o obj/gcm192_avx_gen4.o obj/gcm256_avx_gen4.o obj/sha1_x16_avx512.o obj/sha256_x16_avx512.o obj/sha512_x8_avx512.o obj/des_x16_avx512.o obj/cntr_vaes_avx512.o obj/cntr_ccm_vaes_avx512.o obj/aes_cbc_dec_vaes_avx512.o obj/aes_cbc_enc_vaes_avx512.o obj/aes_cbcs_enc_vaes_avx512.o obj/aes_cbcs_dec_vaes_avx512.o obj/aes_docsis_dec_avx512.o obj/aes_docsis_enc_avx512.o obj/aes_docsis_dec_vaes_avx512.o obj/aes_docsis_enc_vaes_avx512.o obj/zuc_avx512.o obj/mb_mgr_aes_submit_avx512.o obj/mb_mgr_aes_flush_avx512.o obj/mb_mgr_aes192_submit_avx512.o obj/mb_mgr_aes192_flush_avx512.o obj/mb_mgr_aes256_submit_avx512.o obj/mb_mgr_aes256_flush_avx512.o obj/mb_mgr_hmac_flush_avx512.o obj/mb_mgr_hmac_submit_avx512.o obj/mb_mgr_hmac_sha_224_flush_avx512.o obj/mb_mgr_hmac_sha_224_submit_avx512.o obj/mb_mgr_hmac_sha_256_flush_avx512.o obj/mb_mgr_hmac_sha_256_submit_avx512.o obj/mb_mgr_hmac_sha_384_flush_avx512.o obj/mb_mgr_hmac_sha_384_submit_avx512.o obj/mb_mgr_hmac_sha_512_flush_avx512.o obj/mb_mgr_hmac_sha_512_submit_avx512.o obj/mb_mgr_des_avx512.o obj/mb_mgr_aes_cmac_submit_flush_vaes_avx512.o obj/mb_mgr_aes256_cmac_submit_flush_vaes_avx512.o obj/mb_mgr_aes_ccm_auth_submit_flush_vaes_avx512.o obj/mb_mgr_aes256_ccm_auth_submit_flush_vaes_avx512.o obj/mb_mgr_aes_xcbc_submit_flush_vaes_avx512.o obj/mb_mgr_zuc_submit_flush_avx512.o obj/mb_mgr_zuc_submit_flush_gfni_avx512.o obj/chacha20_avx512.o obj/poly_avx512.o obj/poly_fma_avx512.o obj/ethernet_fcs_avx512.o obj/crc16_x25_avx512.o obj/crc32_refl_by16_vclmul_avx512.o obj/crc32_by16_vclmul_avx512.o obj/mb_mgr_aes_cbcs_1_9_submit_avx512.o obj/mb_mgr_aes_cbcs_1_9_flush_avx512.o obj/crc32_sctp_avx512.o obj/crc32_lte_avx512.o obj/crc32_fp_avx512.o obj/crc32_iuup_avx512.o obj/crc32_wimax_avx512.o obj/gcm128_vaes_avx512.o obj/gcm192_vaes_avx512.o obj/gcm256_vaes_avx512.o obj/gcm128_avx512.o obj/gcm192_avx512.o obj/gcm256_avx512.o obj/mb_mgr_avx.o obj/mb_mgr_avx2.o obj/mb_mgr_avx512.o obj/mb_mgr_sse.o obj/mb_mgr_sse_no_aesni.o obj/alloc.o 
obj/aes_xcbc_expand_key.o obj/md5_one_block.o obj/sha_sse.o obj/sha_avx.o obj/des_key.o obj/des_basic.o obj/version.o obj/cpu_feature.o obj/aesni_emu.o obj/kasumi_avx.o obj/kasumi_iv.o obj/kasumi_sse.o obj/zuc_sse_top.o obj/zuc_sse_no_aesni_top.o obj/zuc_avx_top.o obj/zuc_avx2_top.o obj/zuc_avx512_top.o obj/zuc_iv.o obj/snow3g_sse.o obj/snow3g_sse_no_aesni.o obj/snow3g_avx.o obj/snow3g_avx2.o obj/snow3g_tables.o obj/snow3g_iv.o obj/snow_v_sse.o obj/snow_v_sse_noaesni.o obj/mb_mgr_auto.o obj/error.o obj/gcm.o -lc 00:03:00.445 ln -f -s libIPSec_MB.so.1.0.0 ./libIPSec_MB.so.1 00:03:00.445 ln -f -s libIPSec_MB.so.1 ./libIPSec_MB.so 00:03:00.445 make[1]: Leaving directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:03:00.445 make -C test 00:03:00.445 make[1]: Entering directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/test' 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o main.o main.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o gcm_test.o gcm_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ctr_test.o ctr_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o customop_test.o customop_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o des_test.o des_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow 
-fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ccm_test.o ccm_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o cmac_test.o cmac_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o utils.o utils.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hmac_sha1_test.o hmac_sha1_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hmac_sha256_sha512_test.o hmac_sha256_sha512_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hmac_md5_test.o hmac_md5_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o aes_test.o aes_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o sha_test.o sha_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef 
-Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o chained_test.o chained_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o api_test.o api_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o pon_test.o pon_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ecb_test.o ecb_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o zuc_test.o zuc_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o kasumi_test.o kasumi_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o snow3g_test.o snow3g_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o 
direct_api_test.o direct_api_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o clear_mem_test.o clear_mem_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o hec_test.o hec_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o xcbc_test.o xcbc_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o aes_cbcs_test.o aes_cbcs_test.c 00:03:00.704 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o crc_test.o crc_test.c 00:03:00.705 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o chacha_test.o chacha_test.c 00:03:00.705 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o poly1305_test.o poly1305_test.c 00:03:00.705 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare 
-Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o chacha20_poly1305_test.o chacha20_poly1305_test.c 00:03:00.705 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o null_test.o null_test.c 00:03:00.705 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o snow_v_test.o snow_v_test.c 00:03:00.705 gcc -MMD -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -c -o ipsec_xvalid.o ipsec_xvalid.c 00:03:00.705 nasm -MD misc.d -MT misc.o -o misc.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ misc.asm 00:03:00.705 ld -r -z ibt -z shstk -o misc.o.tmp misc.o 00:03:00.705 utils.c:166:32: warning: argument 2 of type ‘uint8_t[6]’ {aka ‘unsigned char[6]’} with mismatched bound [-Warray-parameter=] 00:03:00.705 166 | uint8_t arch_support[IMB_ARCH_NUM], 00:03:00.705 | ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:00.705 In file included from utils.c:35: 00:03:00.705 utils.h:39:54: note: previously declared as ‘uint8_t *’ {aka ‘unsigned char *’} 00:03:00.705 39 | int update_flags_and_archs(const char *arg, uint8_t *arch_support, 00:03:00.705 | ~~~~~~~~~^~~~~~~~~~~~ 00:03:00.705 utils.c:207:21: warning: argument 1 of type ‘uint8_t[6]’ {aka ‘unsigned char[6]’} with mismatched bound [-Warray-parameter=] 00:03:00.705 207 | detect_arch(uint8_t arch_support[IMB_ARCH_NUM]) 00:03:00.705 | ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:00.705 utils.h:41:26: note: previously declared as ‘uint8_t *’ {aka ‘unsigned char *’} 00:03:00.705 41 | int detect_arch(uint8_t *arch_support); 00:03:00.705 | ~~~~~~~~~^~~~~~~~~~~~ 00:03:00.705 mv misc.o.tmp misc.o 00:03:00.705 In file included from null_test.c:33: 00:03:00.705 null_test.c: In function ‘test_null_hash’: 00:03:00.705 ../lib/intel-ipsec-mb.h:1235:10: warning: ‘cipher_key’ may be used uninitialized [-Wmaybe-uninitialized] 00:03:00.705 1235 | ((_mgr)->keyexp_128((_raw), (_enc), (_dec))) 00:03:00.705 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:00.705 null_test.c:62:9: note: in expansion of macro ‘IMB_AES_KEYEXP_128’ 00:03:00.705 62 | IMB_AES_KEYEXP_128(mb_mgr, cipher_key, expkey, dust); 00:03:00.705 | ^~~~~~~~~~~~~~~~~~ 00:03:00.705 ../lib/intel-ipsec-mb.h:1235:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, void *, void *)’ 00:03:00.705 1235 | ((_mgr)->keyexp_128((_raw), (_enc), (_dec))) 00:03:00.705 | 
~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:00.705 null_test.c:62:9: note: in expansion of macro ‘IMB_AES_KEYEXP_128’ 00:03:00.705 62 | IMB_AES_KEYEXP_128(mb_mgr, cipher_key, expkey, dust); 00:03:00.705 | ^~~~~~~~~~~~~~~~~~ 00:03:00.705 null_test.c:47:33: note: ‘cipher_key’ declared here 00:03:00.705 47 | DECLARE_ALIGNED(uint8_t cipher_key[16], 16); 00:03:00.705 | ^~~~~~~~~~ 00:03:00.705 ../lib/intel-ipsec-mb.h:51:9: note: in definition of macro ‘DECLARE_ALIGNED’ 00:03:00.705 51 | decl __attribute__((aligned(alignval))) 00:03:00.705 | ^~~~ 00:03:01.641 gcc -fPIE -z noexecstack -z relro -z now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib main.o gcm_test.o ctr_test.o customop_test.o des_test.o ccm_test.o cmac_test.o utils.o hmac_sha1_test.o hmac_sha256_sha512_test.o hmac_md5_test.o aes_test.o sha_test.o chained_test.o api_test.o pon_test.o ecb_test.o zuc_test.o kasumi_test.o snow3g_test.o direct_api_test.o clear_mem_test.o hec_test.o xcbc_test.o aes_cbcs_test.o crc_test.o chacha_test.o poly1305_test.o chacha20_poly1305_test.o null_test.o snow_v_test.o -lIPSec_MB -o ipsec_MB_testapp 00:03:01.901 gcc -fPIE -z noexecstack -z relro -z now -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib ipsec_xvalid.o utils.o misc.o -lIPSec_MB -o ipsec_xvalid_test 00:03:01.901 make[1]: Leaving directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/test' 00:03:01.901 make -C perf 00:03:01.901 make[1]: Entering directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/perf' 00:03:02.160 gcc -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -pthread -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -fPIE -fstack-protector -D_FORTIFY_SOURCE=2 -c -o ipsec_perf.o ipsec_perf.c 00:03:02.160 gcc -DLINUX -D_GNU_SOURCE -DNO_COMPAT_IMB_API_053 -W -Wall -Wextra -Wmissing-declarations -Wpointer-arith -Wcast-qual -Wundef -Wwrite-strings -Wformat -Wformat-security -Wunreachable-code -Wmissing-noreturn -Wsign-compare -Wno-endif-labels -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -pthread -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv -fcf-protection=full -I../lib/include -I../lib -O3 -fPIE -fstack-protector -D_FORTIFY_SOURCE=2 -c -o msr.o msr.c 00:03:02.160 nasm -MD misc.d -MT misc.o -o misc.o -Werror -felf64 -Xgnu -gdwarf -DLINUX -D__linux__ misc.asm 00:03:02.160 ld -r -z ibt -z shstk -o misc.o.tmp misc.o 00:03:02.160 mv misc.o.tmp misc.o 00:03:02.728 In file included from ipsec_perf.c:59: 00:03:02.728 ipsec_perf.c: In function ‘do_test_gcm’: 00:03:02.728 ../lib/intel-ipsec-mb.h:1382:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized] 00:03:02.728 1382 | ((_mgr)->gcm128_pre((_key_in), (_key_exp))) 00:03:02.728 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:02.728 ipsec_perf.c:1937:17: note: in expansion of macro ‘IMB_AES128_GCM_PRE’ 00:03:02.728 1937 | IMB_AES128_GCM_PRE(mb_mgr, key, &gdata_key); 00:03:02.728 | ^~~~~~~~~~~~~~~~~~ 00:03:02.728 ../lib/intel-ipsec-mb.h:1382:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’ 00:03:02.728 1382 | ((_mgr)->gcm128_pre((_key_in), (_key_exp))) 00:03:02.728 | 
~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:02.728 ipsec_perf.c:1937:17: note: in expansion of macro ‘IMB_AES128_GCM_PRE’ 00:03:02.728 1937 | IMB_AES128_GCM_PRE(mb_mgr, key, &gdata_key); 00:03:02.728 | ^~~~~~~~~~~~~~~~~~ 00:03:02.728 ../lib/intel-ipsec-mb.h:1384:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized] 00:03:02.728 1384 | ((_mgr)->gcm192_pre((_key_in), (_key_exp))) 00:03:02.728 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:02.728 ipsec_perf.c:1940:17: note: in expansion of macro ‘IMB_AES192_GCM_PRE’ 00:03:02.728 1940 | IMB_AES192_GCM_PRE(mb_mgr, key, &gdata_key); 00:03:02.728 | ^~~~~~~~~~~~~~~~~~ 00:03:02.728 ../lib/intel-ipsec-mb.h:1384:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’ 00:03:02.728 1384 | ((_mgr)->gcm192_pre((_key_in), (_key_exp))) 00:03:02.728 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:02.728 ipsec_perf.c:1940:17: note: in expansion of macro ‘IMB_AES192_GCM_PRE’ 00:03:02.728 1940 | IMB_AES192_GCM_PRE(mb_mgr, key, &gdata_key); 00:03:02.728 | ^~~~~~~~~~~~~~~~~~ 00:03:02.728 ../lib/intel-ipsec-mb.h:1386:10: warning: ‘key’ may be used uninitialized [-Wmaybe-uninitialized] 00:03:02.728 1386 | ((_mgr)->gcm256_pre((_key_in), (_key_exp))) 00:03:02.728 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:02.728 ipsec_perf.c:1944:17: note: in expansion of macro ‘IMB_AES256_GCM_PRE’ 00:03:02.728 1944 | IMB_AES256_GCM_PRE(mb_mgr, key, &gdata_key); 00:03:02.728 | ^~~~~~~~~~~~~~~~~~ 00:03:02.728 ../lib/intel-ipsec-mb.h:1386:10: note: by argument 1 of type ‘const void *’ to ‘void(const void *, struct gcm_key_data *)’ 00:03:02.728 1386 | ((_mgr)->gcm256_pre((_key_in), (_key_exp))) 00:03:02.728 | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 00:03:02.728 ipsec_perf.c:1944:17: note: in expansion of macro ‘IMB_AES256_GCM_PRE’ 00:03:02.728 1944 | IMB_AES256_GCM_PRE(mb_mgr, key, &gdata_key); 00:03:02.728 | ^~~~~~~~~~~~~~~~~~ 00:03:03.297 gcc -fPIE -z noexecstack -z relro -z now -pthread -fcf-protection=full -Wl,-z,ibt -Wl,-z,shstk -Wl,-z,cet-report=error -L../lib ipsec_perf.o msr.o misc.o -lIPSec_MB -o ipsec_perf 00:03:03.297 make[1]: Leaving directory '/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/perf' 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@119 -- $ DPDK_DRIVERS+=("crypto") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@120 -- $ DPDK_DRIVERS+=("$intel_ipsec_mb_drv") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@121 -- $ DPDK_DRIVERS+=("crypto/qat") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@122 -- $ DPDK_DRIVERS+=("compress/qat") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@123 -- $ DPDK_DRIVERS+=("common/qat") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@125 -- $ ge 22.11.4 21.11.0 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '>=' 21.11.0 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:03:03.297 10:13:16 build_native_dpdk -- 
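The -Warray-parameter= and -Wmaybe-uninitialized diagnostics above are warnings only; the build still links ipsec_MB_testapp, ipsec_xvalid_test and ipsec_perf. They decode as follows: utils.h declares arch_support as a plain uint8_t * while utils.c defines the same parameter as uint8_t arch_support[IMB_ARCH_NUM], and the key / cipher_key buffers are passed to the IMB_AES_KEYEXP_128 and IMB_AES*_GCM_PRE macros (which only see the raw key as const void *) without gcc being able to prove they were written first. Below is a minimal sketch of both patterns and the conventional fixes; the names are invented for illustration and are not the library's own sources.

    #include <stdint.h>
    #include <string.h>

    #define DEMO_ARCH_NUM 6

    /* -Warray-parameter=: declaration and definition should spell the parameter
     * the same way.  Declaring it here as `uint8_t *arch_support` but defining
     * it below with the `[DEMO_ARCH_NUM]` bound is what gcc 11+ reports for
     * utils.c above; keeping both forms identical silences the warning. */
    int detect_arch_demo(uint8_t arch_support[DEMO_ARCH_NUM]);

    int detect_arch_demo(uint8_t arch_support[DEMO_ARCH_NUM])
    {
        memset(arch_support, 1, DEMO_ARCH_NUM);  /* pretend every arch is supported */
        return 0;
    }

    /* Stand-in for a key-expansion entry point that, like the IMB_* macros,
     * only sees the raw key as `const void *`. */
    static void keyexp_demo(const void *raw_key, void *enc, void *dec)
    {
        memcpy(enc, raw_key, 16);
        memcpy(dec, raw_key, 16);
    }

    int main(void)
    {
        /* Declaring `uint8_t cipher_key[16];` with no initializer and handing it
         * straight to keyexp_demo() is the -Wmaybe-uninitialized pattern: the
         * callee really does read indeterminate bytes.  Zero-initializing (or
         * filling the buffer with test-vector data) is the usual fix. */
        uint8_t cipher_key[16] = {0};
        uint8_t enc[16], dec[16];
        uint8_t archs[DEMO_ARCH_NUM];

        keyexp_demo(cipher_key, enc, dec);
        detect_arch_demo(archs);
        return (int)(enc[0] + dec[0] + archs[0]);
    }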
scripts/common.sh@335 -- $ local 'op=>=' 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:03:03.297 10:13:16 build_native_dpdk -- scripts/common.sh@364 -- $ return 0 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@128 -- $ DPDK_DRIVERS+=("bus/auxiliary") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@129 -- $ DPDK_DRIVERS+=("common/mlx5") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@130 -- $ DPDK_DRIVERS+=("common/mlx5/linux") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@131 -- $ DPDK_DRIVERS+=("crypto/mlx5") 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@132 -- $ mlx5_libs_added=y 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@134 -- $ dpdk_cflags+=' -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@135 -- $ dpdk_ldflags+=' -L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@136 -- $ export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@136 -- $ LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 1 -eq 1 ]] 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@140 -- $ isal_dir=/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l 00:03:03.297 10:13:16 build_native_dpdk -- common/autobuild_common.sh@141 -- 
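The scripts/common.sh trace above is cmp_versions deciding that DPDK 22.11.4 is >= 21.11.0 before the mlx5 drivers get appended: both version strings are split on the IFS set ".-:", the components are read left to right as integers, and the comparison is settled by the first component that differs (here 22 > 21). A rough C rendering of the same walk, purely illustrative and not part of the SPDK scripts:

    #include <stdio.h>
    #include <stdlib.h>

    /* Compare dotted version strings component by component, numerically.
     * Returns >0 if a > b, <0 if a < b and 0 if equal; a missing trailing
     * component counts as 0. */
    static int ver_cmp(const char *a, const char *b)
    {
        while (*a || *b) {
            char *ea, *eb;
            long x = strtol(a, &ea, 10);
            long y = strtol(b, &eb, 10);

            if (x != y)
                return (x > y) ? 1 : -1;
            a = *ea ? ea + 1 : ea;   /* step over one '.', '-' or ':' */
            b = *eb ? eb + 1 : eb;
        }
        return 0;
    }

    int main(void)
    {
        /* Same decision as in the trace: 22.11.4 >= 21.11.0 holds on the first
         * component, so bus/auxiliary and the mlx5 drivers are enabled. */
        printf("22.11.4 >= 21.11.0 ? %s\n",
               ver_cmp("22.11.4", "21.11.0") >= 0 ? "yes" : "no");
        return 0;
    }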
$ git clone --branch v2.29.0 --depth 1 https://github.com/intel/isa-l.git /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l 00:03:03.297 Cloning into '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l'... 00:03:04.235 Note: switching to '806b55ee578efd8158962b90121a4568eb1ecb66'. 00:03:04.235 00:03:04.235 You are in 'detached HEAD' state. You can look around, make experimental 00:03:04.235 changes and commit them, and you can discard any commits you make in this 00:03:04.235 state without impacting any branches by switching back to a branch. 00:03:04.235 00:03:04.235 If you want to create a new branch to retain commits you create, you may 00:03:04.235 do so (now or later) by using -c with the switch command. Example: 00:03:04.235 00:03:04.235 git switch -c 00:03:04.235 00:03:04.235 Or undo this operation with: 00:03:04.235 00:03:04.235 git switch - 00:03:04.235 00:03:04.235 Turn off this advice by setting config variable advice.detachedHead to false 00:03:04.235 00:03:04.235 10:13:16 build_native_dpdk -- common/autobuild_common.sh@143 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l 00:03:04.235 10:13:16 build_native_dpdk -- common/autobuild_common.sh@144 -- $ ./autogen.sh 00:03:06.765 libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'build-aux'. 00:03:06.765 libtoolize: linking file 'build-aux/ltmain.sh' 00:03:07.333 libtoolize: Consider adding 'AC_CONFIG_MACRO_DIRS([m4])' to configure.ac, 00:03:07.333 libtoolize: and rerunning libtoolize and aclocal. 00:03:07.333 libtoolize: Consider adding '-I m4' to ACLOCAL_AMFLAGS in Makefile.am. 00:03:08.267 configure.ac:53: warning: The macro `AC_PROG_CC_STDC' is obsolete. 00:03:08.267 configure.ac:53: You should run autoupdate. 00:03:08.267 ./lib/autoconf/c.m4:1666: AC_PROG_CC_STDC is expanded from... 00:03:08.267 configure.ac:53: the top level 00:03:09.644 configure.ac:23: installing 'build-aux/compile' 00:03:09.644 configure.ac:25: installing 'build-aux/config.guess' 00:03:09.644 configure.ac:25: installing 'build-aux/config.sub' 00:03:09.644 configure.ac:12: installing 'build-aux/install-sh' 00:03:09.644 configure.ac:12: installing 'build-aux/missing' 00:03:09.644 Makefile.am: installing 'build-aux/depcomp' 00:03:09.644 parallel-tests: installing 'build-aux/test-driver' 00:03:09.903 00:03:09.903 ---------------------------------------------------------------- 00:03:09.903 Initialized build system. For a common configuration please run: 00:03:09.903 ---------------------------------------------------------------- 00:03:09.903 00:03:09.903 ./configure --prefix=/usr --libdir=/usr/lib64 00:03:09.903 00:03:09.903 10:13:22 build_native_dpdk -- common/autobuild_common.sh@145 -- $ ./configure 'CFLAGS=-fPIC -g -O2' --enable-shared=yes --prefix=/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build 00:03:10.163 checking for a BSD-compatible install... /usr/bin/install -c 00:03:10.163 checking whether build environment is sane... yes 00:03:10.163 checking for a race-free mkdir -p... /usr/bin/mkdir -p 00:03:10.163 checking for gawk... gawk 00:03:10.163 checking whether make sets $(MAKE)... yes 00:03:10.163 checking whether make supports nested variables... yes 00:03:10.163 checking how to create a pax tar archive... gnutar 00:03:10.163 checking whether make supports the include directive... yes (GNU style) 00:03:10.163 checking for gcc... gcc 00:03:10.422 checking whether the C compiler works... yes 00:03:10.422 checking for C compiler default output file name... a.out 00:03:10.422 checking for suffix of executables... 
00:03:10.686 checking whether we are cross compiling... no 00:03:10.686 checking for suffix of object files... o 00:03:10.686 checking whether the compiler supports GNU C... yes 00:03:10.686 checking whether gcc accepts -g... yes 00:03:10.945 checking for gcc option to enable C11 features... none needed 00:03:10.945 checking whether gcc understands -c and -o together... yes 00:03:10.945 checking dependency style of gcc... gcc3 00:03:11.204 checking dependency style of gcc... gcc3 00:03:11.204 checking build system type... x86_64-pc-linux-gnu 00:03:11.204 checking host system type... x86_64-pc-linux-gnu 00:03:11.464 checking for stdio.h... yes 00:03:11.464 checking for stdlib.h... yes 00:03:11.464 checking for string.h... yes 00:03:11.464 checking for inttypes.h... yes 00:03:11.464 checking for stdint.h... yes 00:03:11.723 checking for strings.h... yes 00:03:11.723 checking for sys/stat.h... yes 00:03:11.723 checking for sys/types.h... yes 00:03:11.723 checking for unistd.h... yes 00:03:12.016 checking for wchar.h... yes 00:03:12.016 checking for minix/config.h... no 00:03:12.016 checking whether it is safe to define __EXTENSIONS__... yes 00:03:12.016 checking whether _XOPEN_SOURCE should be defined... no 00:03:12.016 checking whether make supports nested variables... (cached) yes 00:03:12.016 checking how to print strings... printf 00:03:12.016 checking for a sed that does not truncate output... /usr/bin/sed 00:03:12.016 checking for grep that handles long lines and -e... /usr/bin/grep 00:03:12.016 checking for egrep... /usr/bin/grep -E 00:03:12.016 checking for fgrep... /usr/bin/grep -F 00:03:12.016 checking for ld used by gcc... /usr/bin/ld 00:03:12.016 checking if the linker (/usr/bin/ld) is GNU ld... yes 00:03:12.275 checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B 00:03:12.275 checking the name lister (/usr/bin/nm -B) interface... BSD nm 00:03:12.275 checking whether ln -s works... yes 00:03:12.275 checking the maximum length of command line arguments... 1572864 00:03:12.275 checking how to convert x86_64-pc-linux-gnu file names to x86_64-pc-linux-gnu format... func_convert_file_noop 00:03:12.276 checking how to convert x86_64-pc-linux-gnu file names to toolchain format... func_convert_file_noop 00:03:12.276 checking for /usr/bin/ld option to reload object files... -r 00:03:12.276 checking for file... file 00:03:12.276 checking for objdump... objdump 00:03:12.276 checking how to recognize dependent libraries... pass_all 00:03:12.276 checking for dlltool... no 00:03:12.276 checking how to associate runtime and link libraries... printf %s\n 00:03:12.276 checking for ar... ar 00:03:12.276 checking for archiver @FILE support... @ 00:03:12.276 checking for strip... strip 00:03:12.276 checking for ranlib... ranlib 00:03:12.535 checking command to parse /usr/bin/nm -B output from gcc object... ok 00:03:12.535 checking for sysroot... no 00:03:12.535 checking for a working dd... /usr/bin/dd 00:03:12.535 checking how to truncate binary pipes... /usr/bin/dd bs=4096 count=1 00:03:12.535 checking for mt... no 00:03:12.793 checking if : is a manifest tool... no 00:03:12.793 checking for dlfcn.h... yes 00:03:12.793 checking for objdir... .libs 00:03:13.052 checking if gcc supports -fno-rtti -fno-exceptions... no 00:03:13.052 checking for gcc option to produce PIC... -fPIC -DPIC 00:03:13.052 checking if gcc PIC flag -fPIC -DPIC works... yes 00:03:13.052 checking if gcc static flag -static works... yes 00:03:13.311 checking if gcc supports -c -o file.o... 
yes 00:03:13.311 checking if gcc supports -c -o file.o... (cached) yes 00:03:13.311 checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes 00:03:13.311 checking whether -lc should be explicitly linked in... no 00:03:13.570 checking dynamic linker characteristics... GNU/Linux ld.so 00:03:13.570 checking how to hardcode library paths into programs... immediate 00:03:13.570 checking whether stripping libraries is possible... yes 00:03:13.570 checking if libtool supports shared libraries... yes 00:03:13.570 checking whether to build shared libraries... yes 00:03:13.570 checking whether to build static libraries... yes 00:03:13.570 checking for a sed that does not truncate output... (cached) /usr/bin/sed 00:03:13.570 checking for yasm... yes 00:03:13.570 checking for modern yasm... yes 00:03:13.570 checking for optional yasm AVX512 support... no 00:03:13.570 checking for nasm... yes 00:03:13.570 checking for modern nasm... yes 00:03:13.570 checking for optional nasm AVX512 support... yes 00:03:13.570 checking for additional nasm AVX512 support... yes 00:03:13.570 Using nasm args target "linux" "-f elf64" 00:03:13.570 checking for limits.h... yes 00:03:13.570 checking for stdint.h... (cached) yes 00:03:13.570 checking for stdlib.h... (cached) yes 00:03:13.570 checking for string.h... (cached) yes 00:03:13.830 checking for inline... inline 00:03:13.830 checking for size_t... yes 00:03:13.830 checking for uint16_t... yes 00:03:13.830 checking for uint32_t... yes 00:03:14.089 checking for uint64_t... yes 00:03:14.089 checking for uint8_t... yes 00:03:14.089 checking for GNU libc compatible malloc... yes 00:03:14.348 checking for memmove... yes 00:03:14.348 checking for memset... yes 00:03:14.348 checking for getopt... yes 00:03:14.606 checking that generated files are newer than configure... 
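Each "checking for ..." line in this isa-l configure run stands for a tiny throwaway program that configure compiles (and sometimes links or runs); the printed answer is simply whether that probe succeeded, and a successful probe typically turns into a HAVE_* define in the generated config header. As a hedged illustration, a probe equivalent to "checking for uint64_t... yes" plus "checking for memmove... yes" is little more than:

    /* Roughly the shape of an autoconf type/function probe: if this translation
     * unit compiles and links, configure records "yes" for the feature. */
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint64_t x = 0;               /* does the build environment have uint64_t? */
        char buf[8] = "isa-l";
        memmove(buf + 1, buf, 6);     /* is memmove available and linkable?        */
        return (int)(x + (uint64_t)buf[0]);
    }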
done 00:03:14.606 configure: creating ./config.status 00:03:15.983 config.status: creating Makefile 00:03:15.983 config.status: creating libisal.pc 00:03:15.983 config.status: executing depfiles commands 00:03:17.363 config.status: executing libtool commands 00:03:17.363 00:03:17.363 isa-l 2.29.0 00:03:17.363 ===== 00:03:17.363 00:03:17.363 prefix: /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build 00:03:17.363 sysconfdir: ${prefix}/etc 00:03:17.363 libdir: ${exec_prefix}/lib 00:03:17.363 includedir: ${prefix}/include 00:03:17.363 00:03:17.363 compiler: gcc 00:03:17.363 cflags: -fPIC -g -O2 00:03:17.363 ldflags: 00:03:17.363 00:03:17.363 debug: no 00:03:17.363 00:03:17.623 10:13:30 build_native_dpdk -- common/autobuild_common.sh@146 -- $ ln -s /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/include /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/isa-l 00:03:17.623 10:13:30 build_native_dpdk -- common/autobuild_common.sh@147 -- $ make -j112 all 00:03:17.623 Building isa-l.h 00:03:17.623 make --no-print-directory all-am 00:03:17.883 CC erasure_code/ec_highlevel_func.lo 00:03:17.883 MKTMP erasure_code/gf_vect_mul_sse.s 00:03:17.883 MKTMP erasure_code/gf_vect_mul_avx.s 00:03:17.883 MKTMP erasure_code/gf_vect_dot_prod_sse.s 00:03:17.883 MKTMP erasure_code/gf_vect_dot_prod_avx.s 00:03:17.883 MKTMP erasure_code/gf_vect_dot_prod_avx2.s 00:03:17.883 MKTMP erasure_code/gf_2vect_dot_prod_sse.s 00:03:17.883 MKTMP erasure_code/gf_3vect_dot_prod_sse.s 00:03:17.883 MKTMP erasure_code/gf_4vect_dot_prod_sse.s 00:03:17.883 MKTMP erasure_code/gf_5vect_dot_prod_sse.s 00:03:17.883 MKTMP erasure_code/gf_6vect_dot_prod_sse.s 00:03:17.883 MKTMP erasure_code/gf_2vect_dot_prod_avx.s 00:03:17.883 MKTMP erasure_code/gf_3vect_dot_prod_avx.s 00:03:17.883 MKTMP erasure_code/gf_4vect_dot_prod_avx.s 00:03:17.883 MKTMP erasure_code/gf_5vect_dot_prod_avx.s 00:03:17.883 MKTMP erasure_code/gf_6vect_dot_prod_avx.s 00:03:17.883 MKTMP erasure_code/gf_2vect_dot_prod_avx2.s 00:03:17.883 MKTMP erasure_code/gf_3vect_dot_prod_avx2.s 00:03:17.883 MKTMP erasure_code/gf_4vect_dot_prod_avx2.s 00:03:17.883 MKTMP erasure_code/gf_5vect_dot_prod_avx2.s 00:03:17.883 MKTMP erasure_code/gf_6vect_dot_prod_avx2.s 00:03:17.883 MKTMP erasure_code/gf_vect_mad_sse.s 00:03:17.883 MKTMP erasure_code/gf_3vect_mad_sse.s 00:03:17.883 MKTMP erasure_code/gf_2vect_mad_sse.s 00:03:17.883 MKTMP erasure_code/gf_4vect_mad_sse.s 00:03:17.883 MKTMP erasure_code/gf_5vect_mad_sse.s 00:03:17.883 MKTMP erasure_code/gf_6vect_mad_sse.s 00:03:17.883 MKTMP erasure_code/gf_vect_mad_avx.s 00:03:17.883 MKTMP erasure_code/gf_2vect_mad_avx.s 00:03:17.883 MKTMP erasure_code/gf_3vect_mad_avx.s 00:03:17.883 MKTMP erasure_code/gf_4vect_mad_avx.s 00:03:17.883 MKTMP erasure_code/gf_5vect_mad_avx.s 00:03:17.883 MKTMP erasure_code/gf_6vect_mad_avx.s 00:03:17.883 MKTMP erasure_code/gf_vect_mad_avx2.s 00:03:17.883 MKTMP erasure_code/gf_2vect_mad_avx2.s 00:03:17.883 MKTMP erasure_code/gf_3vect_mad_avx2.s 00:03:17.884 MKTMP erasure_code/gf_4vect_mad_avx2.s 00:03:17.884 MKTMP erasure_code/gf_5vect_mad_avx2.s 00:03:17.884 MKTMP erasure_code/gf_6vect_mad_avx2.s 00:03:17.884 MKTMP erasure_code/ec_multibinary.s 00:03:17.884 MKTMP erasure_code/gf_vect_dot_prod_avx512.s 00:03:17.884 MKTMP erasure_code/gf_2vect_dot_prod_avx512.s 00:03:17.884 MKTMP erasure_code/gf_3vect_dot_prod_avx512.s 00:03:17.884 MKTMP erasure_code/gf_4vect_dot_prod_avx512.s 00:03:17.884 MKTMP erasure_code/gf_5vect_dot_prod_avx512.s 00:03:17.884 MKTMP erasure_code/gf_6vect_dot_prod_avx512.s 00:03:17.884 
MKTMP erasure_code/gf_vect_mad_avx512.s 00:03:17.884 MKTMP erasure_code/gf_2vect_mad_avx512.s 00:03:17.884 MKTMP erasure_code/gf_3vect_mad_avx512.s 00:03:17.884 MKTMP erasure_code/gf_4vect_mad_avx512.s 00:03:17.884 MKTMP erasure_code/gf_5vect_mad_avx512.s 00:03:17.884 MKTMP erasure_code/gf_6vect_mad_avx512.s 00:03:17.884 MKTMP raid/xor_gen_sse.s 00:03:17.884 MKTMP raid/pq_gen_sse.s 00:03:17.884 MKTMP raid/xor_check_sse.s 00:03:17.884 MKTMP raid/pq_check_sse.s 00:03:17.884 MKTMP raid/pq_gen_avx.s 00:03:17.884 MKTMP raid/xor_gen_avx.s 00:03:17.884 MKTMP raid/pq_gen_avx2.s 00:03:17.884 MKTMP raid/xor_gen_avx512.s 00:03:17.884 MKTMP raid/pq_gen_avx512.s 00:03:17.884 MKTMP raid/raid_multibinary.s 00:03:17.884 MKTMP crc/crc16_t10dif_01.s 00:03:17.884 MKTMP crc/crc16_t10dif_by4.s 00:03:17.884 MKTMP crc/crc16_t10dif_02.s 00:03:17.884 MKTMP crc/crc16_t10dif_by16_10.s 00:03:17.884 MKTMP crc/crc16_t10dif_copy_by4.s 00:03:17.884 MKTMP crc/crc16_t10dif_copy_by4_02.s 00:03:17.884 MKTMP crc/crc32_ieee_01.s 00:03:17.884 MKTMP crc/crc32_ieee_02.s 00:03:17.884 MKTMP crc/crc32_ieee_by4.s 00:03:17.884 MKTMP crc/crc32_ieee_by16_10.s 00:03:17.884 MKTMP crc/crc32_iscsi_01.s 00:03:17.884 MKTMP crc/crc32_iscsi_00.s 00:03:17.884 MKTMP crc/crc_multibinary.s 00:03:17.884 MKTMP crc/crc64_multibinary.s 00:03:17.884 MKTMP crc/crc64_ecma_refl_by8.s 00:03:17.884 MKTMP crc/crc64_ecma_refl_by16_10.s 00:03:17.884 MKTMP crc/crc64_ecma_norm_by8.s 00:03:17.884 MKTMP crc/crc64_ecma_norm_by16_10.s 00:03:17.884 MKTMP crc/crc64_iso_refl_by8.s 00:03:17.884 MKTMP crc/crc64_iso_refl_by16_10.s 00:03:17.884 MKTMP crc/crc64_iso_norm_by8.s 00:03:17.884 MKTMP crc/crc64_iso_norm_by16_10.s 00:03:17.884 MKTMP crc/crc64_jones_refl_by8.s 00:03:17.884 MKTMP crc/crc64_jones_refl_by16_10.s 00:03:17.884 MKTMP crc/crc64_jones_norm_by8.s 00:03:17.884 MKTMP crc/crc64_jones_norm_by16_10.s 00:03:17.884 MKTMP crc/crc32_gzip_refl_by8_02.s 00:03:17.884 MKTMP crc/crc32_gzip_refl_by8.s 00:03:17.884 MKTMP crc/crc32_gzip_refl_by16_10.s 00:03:17.884 MKTMP igzip/igzip_body.s 00:03:17.884 MKTMP igzip/igzip_finish.s 00:03:17.884 MKTMP igzip/igzip_icf_body_h1_gr_bt.s 00:03:17.884 MKTMP igzip/igzip_icf_finish.s 00:03:17.884 MKTMP igzip/rfc1951_lookup.s 00:03:17.884 MKTMP igzip/adler32_sse.s 00:03:17.884 MKTMP igzip/adler32_avx2_4.s 00:03:17.884 MKTMP igzip/igzip_multibinary.s 00:03:17.884 MKTMP igzip/igzip_update_histogram_01.s 00:03:17.884 MKTMP igzip/igzip_update_histogram_04.s 00:03:17.884 MKTMP igzip/igzip_decode_block_stateless_01.s 00:03:17.884 MKTMP igzip/igzip_decode_block_stateless_04.s 00:03:17.884 MKTMP igzip/igzip_inflate_multibinary.s 00:03:17.884 MKTMP igzip/encode_df_04.s 00:03:17.884 MKTMP igzip/proc_heap.s 00:03:17.884 MKTMP igzip/encode_df_06.s 00:03:17.884 MKTMP igzip/igzip_deflate_hash.s 00:03:17.884 MKTMP igzip/igzip_gen_icf_map_lh1_06.s 00:03:17.884 MKTMP igzip/igzip_gen_icf_map_lh1_04.s 00:03:17.884 MKTMP igzip/igzip_set_long_icf_fg_04.s 00:03:17.884 MKTMP igzip/igzip_set_long_icf_fg_06.s 00:03:17.884 MKTMP mem/mem_zero_detect_avx.s 00:03:17.884 MKTMP mem/mem_zero_detect_sse.s 00:03:17.884 MKTMP mem/mem_multibinary.s 00:03:17.884 CC programs/igzip_cli.o 00:03:17.884 CC erasure_code/ec_base.lo 00:03:17.884 CC raid/raid_base.lo 00:03:17.884 CC crc/crc_base.lo 00:03:17.884 CC crc/crc64_base.lo 00:03:17.884 CC igzip/igzip.lo 00:03:17.884 CC igzip/igzip_icf_base.lo 00:03:17.884 CC igzip/hufftables_c.lo 00:03:17.884 CC igzip/igzip_base.lo 00:03:17.884 CC igzip/encode_df.lo 00:03:17.884 CC igzip/adler32_base.lo 00:03:17.884 CC igzip/flatten_ll.lo 
00:03:17.884 CC igzip/igzip_icf_body.lo 00:03:17.884 CC igzip/huff_codes.lo 00:03:17.884 CC igzip/igzip_inflate.lo 00:03:17.884 CC mem/mem_zero_detect_base.lo 00:03:17.884 CCAS erasure_code/gf_vect_mul_sse.lo 00:03:17.884 CCAS erasure_code/gf_vect_mul_avx.lo 00:03:17.884 CCAS erasure_code/gf_vect_dot_prod_sse.lo 00:03:17.884 CCAS erasure_code/gf_vect_dot_prod_avx.lo 00:03:17.884 CCAS erasure_code/gf_vect_dot_prod_avx2.lo 00:03:17.884 CCAS erasure_code/gf_2vect_dot_prod_sse.lo 00:03:17.884 CCAS erasure_code/gf_5vect_dot_prod_sse.lo 00:03:17.884 CCAS erasure_code/gf_3vect_dot_prod_sse.lo 00:03:17.884 CCAS erasure_code/gf_4vect_dot_prod_sse.lo 00:03:17.884 CCAS erasure_code/gf_6vect_dot_prod_sse.lo 00:03:17.884 CCAS erasure_code/gf_2vect_dot_prod_avx.lo 00:03:18.143 CCAS erasure_code/gf_3vect_dot_prod_avx.lo 00:03:18.143 CCAS erasure_code/gf_4vect_dot_prod_avx.lo 00:03:18.143 CCAS erasure_code/gf_6vect_dot_prod_avx.lo 00:03:18.143 CCAS erasure_code/gf_5vect_dot_prod_avx.lo 00:03:18.143 CCAS erasure_code/gf_2vect_dot_prod_avx2.lo 00:03:18.143 CCAS erasure_code/gf_3vect_dot_prod_avx2.lo 00:03:18.143 CCAS erasure_code/gf_4vect_dot_prod_avx2.lo 00:03:18.143 CCAS erasure_code/gf_5vect_dot_prod_avx2.lo 00:03:18.143 CCAS erasure_code/gf_6vect_dot_prod_avx2.lo 00:03:18.143 CCAS erasure_code/gf_vect_mad_sse.lo 00:03:18.143 CCAS erasure_code/gf_2vect_mad_sse.lo 00:03:18.143 CCAS erasure_code/gf_3vect_mad_sse.lo 00:03:18.143 CCAS erasure_code/gf_4vect_mad_sse.lo 00:03:18.143 CCAS erasure_code/gf_6vect_mad_sse.lo 00:03:18.143 CCAS erasure_code/gf_5vect_mad_sse.lo 00:03:18.143 CCAS erasure_code/gf_vect_mad_avx.lo 00:03:18.143 CCAS erasure_code/gf_2vect_mad_avx.lo 00:03:18.143 CCAS erasure_code/gf_3vect_mad_avx.lo 00:03:18.143 CCAS erasure_code/gf_4vect_mad_avx.lo 00:03:18.143 CCAS erasure_code/gf_5vect_mad_avx.lo 00:03:18.143 CCAS erasure_code/gf_6vect_mad_avx.lo 00:03:18.143 CCAS erasure_code/gf_vect_mad_avx2.lo 00:03:18.143 CCAS erasure_code/gf_2vect_mad_avx2.lo 00:03:18.143 CCAS erasure_code/gf_3vect_mad_avx2.lo 00:03:18.143 CCAS erasure_code/gf_4vect_mad_avx2.lo 00:03:18.143 CCAS erasure_code/gf_5vect_mad_avx2.lo 00:03:18.143 CCAS erasure_code/gf_6vect_mad_avx2.lo 00:03:18.143 CCAS erasure_code/ec_multibinary.lo 00:03:18.143 CCAS erasure_code/gf_vect_dot_prod_avx512.lo 00:03:18.143 CCAS erasure_code/gf_3vect_dot_prod_avx512.lo 00:03:18.143 CCAS erasure_code/gf_2vect_dot_prod_avx512.lo 00:03:18.143 CCAS erasure_code/gf_4vect_dot_prod_avx512.lo 00:03:18.143 CCAS erasure_code/gf_5vect_dot_prod_avx512.lo 00:03:18.143 CCAS erasure_code/gf_6vect_dot_prod_avx512.lo 00:03:18.143 CCAS erasure_code/gf_2vect_mad_avx512.lo 00:03:18.143 CCAS erasure_code/gf_vect_mad_avx512.lo 00:03:18.143 CCAS erasure_code/gf_3vect_mad_avx512.lo 00:03:18.143 CCAS erasure_code/gf_4vect_mad_avx512.lo 00:03:18.143 CCAS erasure_code/gf_5vect_mad_avx512.lo 00:03:18.143 CCAS raid/xor_gen_sse.lo 00:03:18.143 CCAS erasure_code/gf_6vect_mad_avx512.lo 00:03:18.143 CCAS raid/pq_gen_sse.lo 00:03:18.143 CCAS raid/xor_check_sse.lo 00:03:18.143 CCAS raid/pq_gen_avx.lo 00:03:18.143 CCAS raid/pq_check_sse.lo 00:03:18.143 CCAS raid/xor_gen_avx.lo 00:03:18.143 CCAS raid/pq_gen_avx2.lo 00:03:18.143 CCAS raid/xor_gen_avx512.lo 00:03:18.143 CCAS raid/pq_gen_avx512.lo 00:03:18.144 CCAS crc/crc16_t10dif_01.lo 00:03:18.144 CCAS raid/raid_multibinary.lo 00:03:18.144 CCAS crc/crc16_t10dif_02.lo 00:03:18.144 CCAS crc/crc16_t10dif_by4.lo 00:03:18.144 CCAS crc/crc16_t10dif_by16_10.lo 00:03:18.144 CCAS crc/crc16_t10dif_copy_by4.lo 00:03:18.144 CCAS 
crc/crc16_t10dif_copy_by4_02.lo 00:03:18.144 CCAS crc/crc32_ieee_01.lo 00:03:18.144 CCAS crc/crc32_ieee_02.lo 00:03:18.144 CCAS crc/crc32_ieee_by4.lo 00:03:18.144 CCAS crc/crc32_ieee_by16_10.lo 00:03:18.144 CCAS crc/crc32_iscsi_01.lo 00:03:18.144 CCAS crc/crc32_iscsi_00.lo 00:03:18.144 CCAS crc/crc_multibinary.lo 00:03:18.144 CCAS crc/crc64_multibinary.lo 00:03:18.144 CCAS crc/crc64_ecma_refl_by8.lo 00:03:18.144 CCAS crc/crc64_ecma_refl_by16_10.lo 00:03:18.144 CCAS crc/crc64_ecma_norm_by8.lo 00:03:18.144 CCAS crc/crc64_ecma_norm_by16_10.lo 00:03:18.144 CCAS crc/crc64_iso_refl_by8.lo 00:03:18.144 CCAS crc/crc64_iso_refl_by16_10.lo 00:03:18.144 CCAS crc/crc64_iso_norm_by8.lo 00:03:18.144 CCAS crc/crc64_iso_norm_by16_10.lo 00:03:18.144 CCAS crc/crc64_jones_refl_by8.lo 00:03:18.144 CCAS crc/crc64_jones_refl_by16_10.lo 00:03:18.144 CCAS crc/crc64_jones_norm_by8.lo 00:03:18.144 CCAS crc/crc64_jones_norm_by16_10.lo 00:03:18.144 CCAS crc/crc32_gzip_refl_by8.lo 00:03:18.144 CCAS crc/crc32_gzip_refl_by8_02.lo 00:03:18.144 CCAS crc/crc32_gzip_refl_by16_10.lo 00:03:18.144 CCAS igzip/igzip_body.lo 00:03:18.144 CCAS igzip/igzip_finish.lo 00:03:18.144 CCAS igzip/igzip_icf_body_h1_gr_bt.lo 00:03:18.144 CCAS igzip/igzip_icf_finish.lo 00:03:18.144 CCAS igzip/rfc1951_lookup.lo 00:03:18.144 CCAS igzip/adler32_sse.lo 00:03:18.144 CCAS igzip/adler32_avx2_4.lo 00:03:18.144 CCAS igzip/igzip_multibinary.lo 00:03:18.144 CCAS igzip/igzip_update_histogram_01.lo 00:03:18.144 CCAS igzip/igzip_update_histogram_04.lo 00:03:18.144 CCAS igzip/igzip_decode_block_stateless_01.lo 00:03:18.144 CCAS igzip/igzip_decode_block_stateless_04.lo 00:03:18.144 CCAS igzip/igzip_inflate_multibinary.lo 00:03:18.144 CCAS igzip/encode_df_04.lo 00:03:18.144 CCAS igzip/encode_df_06.lo 00:03:18.144 CCAS igzip/proc_heap.lo 00:03:18.144 CCAS igzip/igzip_deflate_hash.lo 00:03:18.144 CCAS igzip/igzip_gen_icf_map_lh1_06.lo 00:03:18.144 CCAS igzip/igzip_gen_icf_map_lh1_04.lo 00:03:18.144 CCAS igzip/igzip_set_long_icf_fg_06.lo 00:03:18.144 CCAS igzip/igzip_set_long_icf_fg_04.lo 00:03:18.144 CCAS mem/mem_zero_detect_avx.lo 00:03:18.144 CCAS mem/mem_zero_detect_sse.lo 00:03:18.144 CCAS mem/mem_multibinary.lo 00:03:22.331 CCLD libisal.la 00:03:22.331 CCLD programs/igzip 00:03:22.591 rm erasure_code/gf_5vect_dot_prod_avx512.s erasure_code/gf_3vect_mad_avx.s erasure_code/gf_5vect_dot_prod_avx2.s erasure_code/gf_6vect_dot_prod_avx.s crc/crc16_t10dif_01.s crc/crc32_iscsi_00.s erasure_code/gf_5vect_dot_prod_avx.s igzip/encode_df_04.s erasure_code/gf_6vect_mad_sse.s erasure_code/gf_4vect_dot_prod_sse.s erasure_code/gf_5vect_mad_avx512.s crc/crc16_t10dif_copy_by4.s erasure_code/gf_5vect_mad_avx2.s erasure_code/gf_vect_mad_avx2.s igzip/proc_heap.s erasure_code/gf_3vect_dot_prod_sse.s igzip/igzip_set_long_icf_fg_06.s crc/crc64_jones_refl_by8.s erasure_code/gf_vect_dot_prod_avx2.s igzip/encode_df_06.s crc/crc_multibinary.s erasure_code/gf_4vect_mad_avx512.s erasure_code/gf_2vect_mad_avx2.s erasure_code/gf_4vect_mad_avx.s igzip/igzip_set_long_icf_fg_04.s crc/crc64_iso_refl_by8.s crc/crc16_t10dif_by16_10.s erasure_code/gf_2vect_dot_prod_avx2.s igzip/igzip_gen_icf_map_lh1_04.s raid/xor_check_sse.s erasure_code/gf_5vect_mad_avx.s raid/pq_gen_sse.s erasure_code/gf_vect_mad_avx.s erasure_code/gf_5vect_dot_prod_sse.s erasure_code/ec_multibinary.s crc/crc64_iso_norm_by16_10.s igzip/rfc1951_lookup.s raid/pq_gen_avx2.s erasure_code/gf_6vect_mad_avx.s crc/crc32_gzip_refl_by8.s igzip/igzip_gen_icf_map_lh1_06.s erasure_code/gf_3vect_dot_prod_avx2.s 
erasure_code/gf_2vect_mad_avx512.s igzip/igzip_update_histogram_04.s crc/crc64_ecma_norm_by16_10.s crc/crc32_ieee_by4.s erasure_code/gf_4vect_dot_prod_avx.s crc/crc16_t10dif_02.s erasure_code/gf_2vect_mad_sse.s raid/xor_gen_sse.s erasure_code/gf_5vect_mad_sse.s erasure_code/gf_3vect_dot_prod_avx512.s erasure_code/gf_3vect_mad_avx512.s raid/pq_gen_avx.s erasure_code/gf_2vect_dot_prod_sse.s igzip/igzip_multibinary.s igzip/igzip_deflate_hash.s erasure_code/gf_vect_mad_avx512.s raid/pq_gen_avx512.s igzip/adler32_sse.s crc/crc32_iscsi_01.s crc/crc16_t10dif_by4.s erasure_code/gf_6vect_dot_prod_avx2.s crc/crc32_gzip_refl_by16_10.s raid/xor_gen_avx512.s erasure_code/gf_vect_dot_prod_avx.s igzip/igzip_icf_finish.s erasure_code/gf_vect_mad_sse.s erasure_code/gf_vect_mul_sse.s erasure_code/gf_6vect_mad_avx512.s igzip/igzip_decode_block_stateless_04.s erasure_code/gf_6vect_mad_avx2.s crc/crc64_ecma_refl_by16_10.s raid/xor_gen_avx.s erasure_code/gf_6vect_dot_prod_avx512.s erasure_code/gf_2vect_mad_avx.s erasure_code/gf_2vect_dot_prod_avx512.s crc/crc32_ieee_by16_10.s crc/crc64_iso_refl_by16_10.s erasure_code/gf_3vect_mad_sse.s raid/pq_check_sse.s erasure_code/gf_2vect_dot_prod_avx.s mem/mem_zero_detect_avx.s crc/crc32_ieee_01.s crc/crc64_jones_refl_by16_10.s crc/crc64_multibinary.s mem/mem_multibinary.s raid/raid_multibinary.s erasure_code/gf_3vect_dot_prod_avx.s crc/crc32_ieee_02.s mem/mem_zero_detect_sse.s igzip/igzip_decode_block_stateless_01.s erasure_code/gf_4vect_dot_prod_avx2.s crc/crc32_gzip_refl_by8_02.s igzip/igzip_finish.s erasure_code/gf_4vect_mad_avx2.s crc/crc16_t10dif_copy_by4_02.s erasure_code/gf_vect_dot_prod_sse.s erasure_code/gf_3vect_mad_avx2.s erasure_code/gf_vect_mul_avx.s igzip/adler32_avx2_4.s erasure_code/gf_4vect_mad_sse.s igzip/igzip_inflate_multibinary.s crc/crc64_ecma_norm_by8.s igzip/igzip_body.s erasure_code/gf_6vect_dot_prod_sse.s crc/crc64_jones_norm_by16_10.s crc/crc64_iso_norm_by8.s crc/crc64_jones_norm_by8.s erasure_code/gf_4vect_dot_prod_avx512.s crc/crc64_ecma_refl_by8.s igzip/igzip_update_histogram_01.s igzip/igzip_icf_body_h1_gr_bt.s erasure_code/gf_vect_dot_prod_avx512.s 00:03:22.592 10:13:35 build_native_dpdk -- common/autobuild_common.sh@148 -- $ make install 00:03:22.592 make --no-print-directory install-am 00:03:22.592 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib' 00:03:22.592 /bin/sh ./libtool --mode=install /usr/bin/install -c libisal.la '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib' 00:03:22.592 libtool: install: /usr/bin/install -c .libs/libisal.so.2.0.29 /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.so.2.0.29 00:03:22.851 libtool: install: (cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib && { ln -s -f libisal.so.2.0.29 libisal.so.2 || { rm -f libisal.so.2 && ln -s libisal.so.2.0.29 libisal.so.2; }; }) 00:03:22.851 libtool: install: (cd /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib && { ln -s -f libisal.so.2.0.29 libisal.so || { rm -f libisal.so && ln -s libisal.so.2.0.29 libisal.so; }; }) 00:03:22.851 libtool: install: /usr/bin/install -c .libs/libisal.lai /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.la 00:03:22.851 libtool: install: /usr/bin/install -c .libs/libisal.a /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.a 00:03:22.851 libtool: install: chmod 644 /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.a 00:03:22.851 libtool: install: ranlib 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/libisal.a 00:03:22.851 libtool: finish: PATH="/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin:/sbin" ldconfig -n /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:03:22.851 ---------------------------------------------------------------------- 00:03:22.851 Libraries have been installed in: 00:03:22.851 /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:03:22.851 00:03:22.851 If you ever happen to want to link against installed libraries 00:03:22.851 in a given directory, LIBDIR, you must either use libtool, and 00:03:22.851 specify the full pathname of the library, or use the '-LLIBDIR' 00:03:22.851 flag during linking and do at least one of the following: 00:03:22.851 - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable 00:03:22.851 during execution 00:03:22.851 - add LIBDIR to the 'LD_RUN_PATH' environment variable 00:03:22.852 during linking 00:03:22.852 - use the '-Wl,-rpath -Wl,LIBDIR' linker flag 00:03:22.852 - have your system administrator add LIBDIR to '/etc/ld.so.conf' 00:03:22.852 00:03:22.852 See any operating system documentation about shared libraries for 00:03:22.852 more information, such as the ld(1) and ld.so(8) manual pages. 00:03:22.852 ---------------------------------------------------------------------- 00:03:22.852 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/bin' 00:03:22.852 /bin/sh ./libtool --mode=install /usr/bin/install -c programs/igzip '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/bin' 00:03:23.111 libtool: install: /usr/bin/install -c programs/.libs/igzip /var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/bin/igzip 00:03:23.111 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/share/man/man1' 00:03:23.111 /usr/bin/install -c -m 644 programs/igzip.1 '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/share/man/man1' 00:03:23.111 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include' 00:03:23.111 /usr/bin/install -c -m 644 isa-l.h '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include/.' 
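The libtool notice above describes how a consumer would link against the freshly installed libisal under this non-standard prefix. A minimal sketch of that (not part of the log; "my_tool.c" is a placeholder source file, and gcc is assumed since it is the compiler used throughout this build):

    # Prefix taken from the install lines above
    ISAL=/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build
    # Compile and link, baking LIBDIR into the binary via an rpath so no
    # LD_LIBRARY_PATH is needed at run time (one of the options the notice lists)
    gcc my_tool.c -o my_tool \
        -I"$ISAL/include" \
        -L"$ISAL/lib" -lisal \
        -Wl,-rpath,"$ISAL/lib"
    # Alternative from the same notice: skip the rpath and set the path at run time
    #   LD_LIBRARY_PATH="$ISAL/lib" ./my_tool
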
00:03:23.111 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig' 00:03:23.111 /usr/bin/install -c -m 644 libisal.pc '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig' 00:03:23.111 /usr/bin/mkdir -p '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include/isa-l' 00:03:23.111 /usr/bin/install -c -m 644 include/test.h include/types.h include/crc.h include/crc64.h include/erasure_code.h include/gf_vect_mul.h include/igzip_lib.h include/mem_routines.h include/raid.h '/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/include/isa-l' 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@149 -- $ DPDK_DRIVERS+=("compress") 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@150 -- $ DPDK_DRIVERS+=("compress/isal") 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@151 -- $ DPDK_DRIVERS+=("compress/qat") 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@152 -- $ DPDK_DRIVERS+=("common/qat") 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@153 -- $ ge 22.11.4 21.02.0 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '>=' 21.02.0 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=>=' 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@364 -- $ return 0 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@156 -- $ test y = n 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@161 -- $ DPDK_DRIVERS+=("compress/mlx5") 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@163 -- $ export PKG_CONFIG_PATH=:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@163 -- $ PKG_CONFIG_PATH=:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib/pkgconfig 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@164 -- $ export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@164 -- $ LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/dpdk 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:23.111 10:13:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:03:23.111 10:13:35 
build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:03:23.111 10:13:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:03:23.112 10:13:35 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:03:23.112 10:13:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:23.112 patching file config/rte_config.h 00:03:23.112 Hunk #1 succeeded at 60 (offset 1 line). 00:03:23.112 10:13:36 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:23.112 10:13:36 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:03:23.371 10:13:36 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:03:23.371 10:13:36 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:23.371 patching file lib/pcapng/rte_pcapng.c 00:03:23.371 Hunk #1 succeeded at 110 (offset -18 lines). 00:03:23.371 10:13:36 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:03:23.371 10:13:36 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:03:23.371 10:13:36 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:03:23.371 10:13:36 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base crypto crypto/ipsec_mb crypto/qat compress/qat common/qat bus/auxiliary common/mlx5 common/mlx5/linux crypto/mlx5 compress compress/isal compress/qat common/qat compress/mlx5 00:03:23.371 10:13:36 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false '-Dc_link_args= -L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,compress,compress/isal,compress/qat,common/qat,compress/mlx5, 00:03:28.646 The Meson build system 00:03:28.646 Version: 1.3.1 00:03:28.646 Source dir: /var/jenkins/workspace/crypto-phy-autotest/dpdk 00:03:28.646 Build dir: /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp 00:03:28.646 Build type: native build 00:03:28.646 Program cat found: YES (/usr/bin/cat) 00:03:28.646 Project name: DPDK 00:03:28.646 Project version: 22.11.4 00:03:28.646 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:28.646 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:28.646 Host machine cpu family: x86_64 00:03:28.646 Host machine cpu: x86_64 00:03:28.646 Message: ## Building in Developer Mode ## 00:03:28.646 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:28.646 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:03:28.646 Program options-ibverbs-static.sh 
found: YES (/var/jenkins/workspace/crypto-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:03:28.646 Program objdump found: YES (/usr/bin/objdump) 00:03:28.646 Program python3 found: YES (/usr/bin/python3) 00:03:28.646 Program cat found: YES (/usr/bin/cat) 00:03:28.646 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:28.646 Checking for size of "void *" : 8 00:03:28.646 Checking for size of "void *" : 8 (cached) 00:03:28.646 Library m found: YES 00:03:28.646 Library numa found: YES 00:03:28.646 Has header "numaif.h" : YES 00:03:28.646 Library fdt found: NO 00:03:28.646 Library execinfo found: NO 00:03:28.646 Has header "execinfo.h" : YES 00:03:28.646 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:28.646 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:28.646 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:28.646 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:28.646 Run-time dependency openssl found: YES 3.0.9 00:03:28.646 Run-time dependency libpcap found: YES 1.10.4 00:03:28.646 Has header "pcap.h" with dependency libpcap: YES 00:03:28.646 Compiler for C supports arguments -Wcast-qual: YES 00:03:28.646 Compiler for C supports arguments -Wdeprecated: YES 00:03:28.646 Compiler for C supports arguments -Wformat: YES 00:03:28.646 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:28.646 Compiler for C supports arguments -Wformat-security: NO 00:03:28.646 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:28.646 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:28.646 Compiler for C supports arguments -Wnested-externs: YES 00:03:28.646 Compiler for C supports arguments -Wold-style-definition: YES 00:03:28.646 Compiler for C supports arguments -Wpointer-arith: YES 00:03:28.646 Compiler for C supports arguments -Wsign-compare: YES 00:03:28.646 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:28.646 Compiler for C supports arguments -Wundef: YES 00:03:28.646 Compiler for C supports arguments -Wwrite-strings: YES 00:03:28.646 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:28.646 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:28.646 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:28.646 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:28.646 Compiler for C supports arguments -mavx512f: YES 00:03:28.646 Checking if "AVX512 checking" compiles: YES 00:03:28.646 Fetching value of define "__SSE4_2__" : 1 00:03:28.646 Fetching value of define "__AES__" : 1 00:03:28.646 Fetching value of define "__AVX__" : 1 00:03:28.646 Fetching value of define "__AVX2__" : 1 00:03:28.646 Fetching value of define "__AVX512BW__" : 1 00:03:28.646 Fetching value of define "__AVX512CD__" : 1 00:03:28.646 Fetching value of define "__AVX512DQ__" : 1 00:03:28.646 Fetching value of define "__AVX512F__" : 1 00:03:28.646 Fetching value of define "__AVX512VL__" : 1 00:03:28.646 Fetching value of define "__PCLMUL__" : 1 00:03:28.646 Fetching value of define "__RDRND__" : 1 00:03:28.646 Fetching value of define "__RDSEED__" : 1 00:03:28.646 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:28.646 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:28.646 Message: lib/kvargs: Defining dependency "kvargs" 00:03:28.646 Message: lib/telemetry: Defining dependency "telemetry" 00:03:28.646 Checking for function "getentropy" : 
YES 00:03:28.646 Message: lib/eal: Defining dependency "eal" 00:03:28.646 Message: lib/ring: Defining dependency "ring" 00:03:28.646 Message: lib/rcu: Defining dependency "rcu" 00:03:28.646 Message: lib/mempool: Defining dependency "mempool" 00:03:28.646 Message: lib/mbuf: Defining dependency "mbuf" 00:03:28.646 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:28.646 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:28.646 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:28.646 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:28.646 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:28.646 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:28.646 Compiler for C supports arguments -mpclmul: YES 00:03:28.646 Compiler for C supports arguments -maes: YES 00:03:28.646 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:28.646 Compiler for C supports arguments -mavx512bw: YES 00:03:28.646 Compiler for C supports arguments -mavx512dq: YES 00:03:28.646 Compiler for C supports arguments -mavx512vl: YES 00:03:28.646 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:28.646 Compiler for C supports arguments -mavx2: YES 00:03:28.647 Compiler for C supports arguments -mavx: YES 00:03:28.647 Message: lib/net: Defining dependency "net" 00:03:28.647 Message: lib/meter: Defining dependency "meter" 00:03:28.647 Message: lib/ethdev: Defining dependency "ethdev" 00:03:28.647 Message: lib/pci: Defining dependency "pci" 00:03:28.647 Message: lib/cmdline: Defining dependency "cmdline" 00:03:28.647 Message: lib/metrics: Defining dependency "metrics" 00:03:28.647 Message: lib/hash: Defining dependency "hash" 00:03:28.647 Message: lib/timer: Defining dependency "timer" 00:03:28.647 Fetching value of define "__AVX2__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:28.647 Message: lib/acl: Defining dependency "acl" 00:03:28.647 Message: lib/bbdev: Defining dependency "bbdev" 00:03:28.647 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:28.647 Run-time dependency libelf found: YES 0.190 00:03:28.647 Message: lib/bpf: Defining dependency "bpf" 00:03:28.647 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:28.647 Message: lib/compressdev: Defining dependency "compressdev" 00:03:28.647 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:28.647 Message: lib/distributor: Defining dependency "distributor" 00:03:28.647 Message: lib/efd: Defining dependency "efd" 00:03:28.647 Message: lib/eventdev: Defining dependency "eventdev" 00:03:28.647 Message: lib/gpudev: Defining dependency "gpudev" 00:03:28.647 Message: lib/gro: Defining dependency "gro" 00:03:28.647 Message: lib/gso: Defining dependency "gso" 00:03:28.647 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:28.647 Message: lib/jobstats: Defining dependency "jobstats" 00:03:28.647 Message: lib/latencystats: Defining dependency "latencystats" 00:03:28.647 Message: lib/lpm: Defining dependency "lpm" 00:03:28.647 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:28.647 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:28.647 Message: lib/member: 
Defining dependency "member" 00:03:28.647 Message: lib/pcapng: Defining dependency "pcapng" 00:03:28.647 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:28.647 Message: lib/power: Defining dependency "power" 00:03:28.647 Message: lib/rawdev: Defining dependency "rawdev" 00:03:28.647 Message: lib/regexdev: Defining dependency "regexdev" 00:03:28.647 Message: lib/dmadev: Defining dependency "dmadev" 00:03:28.647 Message: lib/rib: Defining dependency "rib" 00:03:28.647 Message: lib/reorder: Defining dependency "reorder" 00:03:28.647 Message: lib/sched: Defining dependency "sched" 00:03:28.647 Message: lib/security: Defining dependency "security" 00:03:28.647 Message: lib/stack: Defining dependency "stack" 00:03:28.647 Has header "linux/userfaultfd.h" : YES 00:03:28.647 Message: lib/vhost: Defining dependency "vhost" 00:03:28.647 Message: lib/ipsec: Defining dependency "ipsec" 00:03:28.647 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:28.647 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:28.647 Message: lib/fib: Defining dependency "fib" 00:03:28.647 Message: lib/port: Defining dependency "port" 00:03:28.647 Message: lib/pdump: Defining dependency "pdump" 00:03:28.647 Message: lib/table: Defining dependency "table" 00:03:28.647 Message: lib/pipeline: Defining dependency "pipeline" 00:03:28.647 Message: lib/graph: Defining dependency "graph" 00:03:28.647 Message: lib/node: Defining dependency "node" 00:03:28.647 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:28.647 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:03:28.647 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:28.647 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:28.647 Compiler for C supports arguments -std=c11: YES 00:03:28.647 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:03:28.647 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:03:28.647 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:03:28.647 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:03:33.933 Run-time dependency libmlx5 found: YES 1.24.44.0 00:03:33.933 Run-time dependency libibverbs found: YES 1.14.44.0 00:03:33.933 Library mtcr_ul found: NO 00:03:33.933 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol 
"IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseKR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseCR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header 
"linux/ethtool.h" has symbol "SUPPORTED_56000baseSR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseLR4_Full" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_25000baseCR_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, 
libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:03:33.933 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:03:35.883 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:03:35.883 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:03:35.883 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:03:35.883 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:03:35.883 Configuring mlx5_autoconf.h using configuration 00:03:35.883 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:03:35.883 Run-time dependency libcrypto found: YES 3.0.9 00:03:35.883 Library IPSec_MB found: YES 00:03:35.883 Dependency libcrypto found: YES 3.0.9 (cached) 00:03:35.883 Fetching value of define "IMB_VERSION_STR" : "1.0.0" 00:03:35.883 Message: drivers/common/qat: Defining dependency "common_qat" 00:03:35.883 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:35.883 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:35.883 Compiler for C supports arguments -Wno-unused-value: YES 00:03:35.883 Compiler for C supports arguments -Wno-format: YES 00:03:35.883 Compiler for C supports arguments -Wno-format-security: YES 00:03:35.883 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:35.883 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:35.883 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:35.883 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:35.883 Fetching value of define "__AVX2__" : 1 (cached) 00:03:35.883 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:35.883 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:35.883 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:35.883 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:35.883 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:35.883 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:35.883 Library IPSec_MB found: YES 00:03:35.883 Fetching value of define "IMB_VERSION_STR" : "1.0.0" (cached) 00:03:35.883 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:03:35.883 Compiler for C supports arguments -std=c11: YES (cached) 00:03:35.883 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:03:35.883 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:03:35.883 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:03:35.883 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:03:35.883 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:03:35.883 Run-time dependency libisal found: YES 2.29.0 00:03:35.883 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:03:35.883 Compiler for C supports arguments -std=c11: YES (cached) 00:03:35.883 Compiler for C 
supports arguments -Wno-strict-prototypes: YES (cached) 00:03:35.883 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:03:35.883 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:03:35.883 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:03:35.883 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:03:35.883 Program doxygen found: YES (/usr/bin/doxygen) 00:03:35.883 Configuring doxy-api.conf using configuration 00:03:35.883 Program sphinx-build found: NO 00:03:35.883 Configuring rte_build_config.h using configuration 00:03:35.883 Message: 00:03:35.883 ================= 00:03:35.883 Applications Enabled 00:03:35.883 ================= 00:03:35.883 00:03:35.883 apps: 00:03:35.883 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:35.883 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:35.883 test-security-perf, 00:03:35.883 00:03:35.883 Message: 00:03:35.883 ================= 00:03:35.883 Libraries Enabled 00:03:35.883 ================= 00:03:35.883 00:03:35.883 libs: 00:03:35.883 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:35.883 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:35.883 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:35.883 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:35.883 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:35.883 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:35.883 table, pipeline, graph, node, 00:03:35.883 00:03:35.883 Message: 00:03:35.883 =============== 00:03:35.883 Drivers Enabled 00:03:35.883 =============== 00:03:35.883 00:03:35.883 common: 00:03:35.883 mlx5, qat, 00:03:35.883 bus: 00:03:35.883 auxiliary, pci, vdev, 00:03:35.883 mempool: 00:03:35.883 ring, 00:03:35.883 dma: 00:03:35.883 00:03:35.883 net: 00:03:35.883 i40e, 00:03:35.883 raw: 00:03:35.883 00:03:35.883 crypto: 00:03:35.883 ipsec_mb, mlx5, 00:03:35.883 compress: 00:03:35.883 isal, mlx5, 00:03:35.883 regex: 00:03:35.883 00:03:35.883 vdpa: 00:03:35.883 00:03:35.883 event: 00:03:35.883 00:03:35.883 baseband: 00:03:35.883 00:03:35.883 gpu: 00:03:35.883 00:03:35.883 00:03:35.883 Message: 00:03:35.883 ================= 00:03:35.883 Content Skipped 00:03:35.883 ================= 00:03:35.883 00:03:35.883 apps: 00:03:35.883 00:03:35.883 libs: 00:03:35.883 kni: explicitly disabled via build config (deprecated lib) 00:03:35.883 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:35.883 00:03:35.883 drivers: 00:03:35.883 common/cpt: not in enabled drivers build config 00:03:35.883 common/dpaax: not in enabled drivers build config 00:03:35.883 common/iavf: not in enabled drivers build config 00:03:35.883 common/idpf: not in enabled drivers build config 00:03:35.883 common/mvep: not in enabled drivers build config 00:03:35.883 common/octeontx: not in enabled drivers build config 00:03:35.883 bus/dpaa: not in enabled drivers build config 00:03:35.883 bus/fslmc: not in enabled drivers build config 00:03:35.883 bus/ifpga: not in enabled drivers build config 00:03:35.883 bus/vmbus: not in enabled drivers build config 00:03:35.883 common/cnxk: not in enabled drivers build config 00:03:35.883 common/sfc_efx: not in enabled drivers build config 00:03:35.883 mempool/bucket: not in enabled drivers build config 00:03:35.883 mempool/cnxk: not in enabled drivers build config 
00:03:35.883 mempool/dpaa: not in enabled drivers build config 00:03:35.883 mempool/dpaa2: not in enabled drivers build config 00:03:35.883 mempool/octeontx: not in enabled drivers build config 00:03:35.883 mempool/stack: not in enabled drivers build config 00:03:35.883 dma/cnxk: not in enabled drivers build config 00:03:35.883 dma/dpaa: not in enabled drivers build config 00:03:35.883 dma/dpaa2: not in enabled drivers build config 00:03:35.883 dma/hisilicon: not in enabled drivers build config 00:03:35.883 dma/idxd: not in enabled drivers build config 00:03:35.883 dma/ioat: not in enabled drivers build config 00:03:35.883 dma/skeleton: not in enabled drivers build config 00:03:35.883 net/af_packet: not in enabled drivers build config 00:03:35.883 net/af_xdp: not in enabled drivers build config 00:03:35.883 net/ark: not in enabled drivers build config 00:03:35.883 net/atlantic: not in enabled drivers build config 00:03:35.883 net/avp: not in enabled drivers build config 00:03:35.883 net/axgbe: not in enabled drivers build config 00:03:35.883 net/bnx2x: not in enabled drivers build config 00:03:35.883 net/bnxt: not in enabled drivers build config 00:03:35.883 net/bonding: not in enabled drivers build config 00:03:35.883 net/cnxk: not in enabled drivers build config 00:03:35.883 net/cxgbe: not in enabled drivers build config 00:03:35.883 net/dpaa: not in enabled drivers build config 00:03:35.883 net/dpaa2: not in enabled drivers build config 00:03:35.883 net/e1000: not in enabled drivers build config 00:03:35.883 net/ena: not in enabled drivers build config 00:03:35.883 net/enetc: not in enabled drivers build config 00:03:35.883 net/enetfec: not in enabled drivers build config 00:03:35.884 net/enic: not in enabled drivers build config 00:03:35.884 net/failsafe: not in enabled drivers build config 00:03:35.884 net/fm10k: not in enabled drivers build config 00:03:35.884 net/gve: not in enabled drivers build config 00:03:35.884 net/hinic: not in enabled drivers build config 00:03:35.884 net/hns3: not in enabled drivers build config 00:03:35.884 net/iavf: not in enabled drivers build config 00:03:35.884 net/ice: not in enabled drivers build config 00:03:35.884 net/idpf: not in enabled drivers build config 00:03:35.884 net/igc: not in enabled drivers build config 00:03:35.884 net/ionic: not in enabled drivers build config 00:03:35.884 net/ipn3ke: not in enabled drivers build config 00:03:35.884 net/ixgbe: not in enabled drivers build config 00:03:35.884 net/kni: not in enabled drivers build config 00:03:35.884 net/liquidio: not in enabled drivers build config 00:03:35.884 net/mana: not in enabled drivers build config 00:03:35.884 net/memif: not in enabled drivers build config 00:03:35.884 net/mlx4: not in enabled drivers build config 00:03:35.884 net/mlx5: not in enabled drivers build config 00:03:35.884 net/mvneta: not in enabled drivers build config 00:03:35.884 net/mvpp2: not in enabled drivers build config 00:03:35.884 net/netvsc: not in enabled drivers build config 00:03:35.884 net/nfb: not in enabled drivers build config 00:03:35.884 net/nfp: not in enabled drivers build config 00:03:35.884 net/ngbe: not in enabled drivers build config 00:03:35.884 net/null: not in enabled drivers build config 00:03:35.884 net/octeontx: not in enabled drivers build config 00:03:35.884 net/octeon_ep: not in enabled drivers build config 00:03:35.884 net/pcap: not in enabled drivers build config 00:03:35.884 net/pfe: not in enabled drivers build config 00:03:35.884 net/qede: not in enabled drivers build config 
00:03:35.884 net/ring: not in enabled drivers build config 00:03:35.884 net/sfc: not in enabled drivers build config 00:03:35.884 net/softnic: not in enabled drivers build config 00:03:35.884 net/tap: not in enabled drivers build config 00:03:35.884 net/thunderx: not in enabled drivers build config 00:03:35.884 net/txgbe: not in enabled drivers build config 00:03:35.884 net/vdev_netvsc: not in enabled drivers build config 00:03:35.884 net/vhost: not in enabled drivers build config 00:03:35.884 net/virtio: not in enabled drivers build config 00:03:35.884 net/vmxnet3: not in enabled drivers build config 00:03:35.884 raw/cnxk_bphy: not in enabled drivers build config 00:03:35.884 raw/cnxk_gpio: not in enabled drivers build config 00:03:35.884 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:35.884 raw/ifpga: not in enabled drivers build config 00:03:35.884 raw/ntb: not in enabled drivers build config 00:03:35.884 raw/skeleton: not in enabled drivers build config 00:03:35.884 crypto/armv8: not in enabled drivers build config 00:03:35.884 crypto/bcmfs: not in enabled drivers build config 00:03:35.884 crypto/caam_jr: not in enabled drivers build config 00:03:35.884 crypto/ccp: not in enabled drivers build config 00:03:35.884 crypto/cnxk: not in enabled drivers build config 00:03:35.884 crypto/dpaa_sec: not in enabled drivers build config 00:03:35.884 crypto/dpaa2_sec: not in enabled drivers build config 00:03:35.884 crypto/mvsam: not in enabled drivers build config 00:03:35.884 crypto/nitrox: not in enabled drivers build config 00:03:35.884 crypto/null: not in enabled drivers build config 00:03:35.884 crypto/octeontx: not in enabled drivers build config 00:03:35.884 crypto/openssl: not in enabled drivers build config 00:03:35.884 crypto/scheduler: not in enabled drivers build config 00:03:35.884 crypto/uadk: not in enabled drivers build config 00:03:35.884 crypto/virtio: not in enabled drivers build config 00:03:35.884 compress/octeontx: not in enabled drivers build config 00:03:35.884 compress/zlib: not in enabled drivers build config 00:03:35.884 regex/mlx5: not in enabled drivers build config 00:03:35.884 regex/cn9k: not in enabled drivers build config 00:03:35.884 vdpa/ifc: not in enabled drivers build config 00:03:35.884 vdpa/mlx5: not in enabled drivers build config 00:03:35.884 vdpa/sfc: not in enabled drivers build config 00:03:35.884 event/cnxk: not in enabled drivers build config 00:03:35.884 event/dlb2: not in enabled drivers build config 00:03:35.884 event/dpaa: not in enabled drivers build config 00:03:35.884 event/dpaa2: not in enabled drivers build config 00:03:35.884 event/dsw: not in enabled drivers build config 00:03:35.884 event/opdl: not in enabled drivers build config 00:03:35.884 event/skeleton: not in enabled drivers build config 00:03:35.884 event/sw: not in enabled drivers build config 00:03:35.884 event/octeontx: not in enabled drivers build config 00:03:35.884 baseband/acc: not in enabled drivers build config 00:03:35.884 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:35.884 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:35.884 baseband/la12xx: not in enabled drivers build config 00:03:35.884 baseband/null: not in enabled drivers build config 00:03:35.884 baseband/turbo_sw: not in enabled drivers build config 00:03:35.884 gpu/cuda: not in enabled drivers build config 00:03:35.884 00:03:35.884 00:03:35.884 Build targets in project: 355 00:03:35.884 00:03:35.884 DPDK 22.11.4 00:03:35.884 00:03:35.884 User defined options 
00:03:35.884 libdir : lib
00:03:35.884 prefix : /var/jenkins/workspace/crypto-phy-autotest/dpdk/build
00:03:35.884 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib
00:03:35.884 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib
00:03:35.884 enable_docs : false
00:03:35.884 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,compress,compress/isal,compress/qat,common/qat,compress/mlx5,
00:03:35.884 enable_kmods : false
00:03:35.884 machine : native
00:03:35.884 tests : false
00:03:35.884 
00:03:35.884 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:35.884 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:35.884 10:13:48 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp -j112
00:03:35.884 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp'
00:03:35.884 [1/854] Generating lib/rte_kvargs_def with a custom command 00:03:35.884 [2/854] Generating lib/rte_kvargs_mingw with a custom command 00:03:35.884 [3/854] Generating lib/rte_telemetry_def with a custom command 00:03:35.884 [4/854] Generating lib/rte_telemetry_mingw with a custom command 00:03:35.884 [5/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:35.884 [6/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:35.884 [7/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:35.884 [8/854] Generating lib/rte_rcu_def with a custom command 00:03:35.884 [9/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:35.884 [10/854] Generating lib/rte_eal_mingw with a custom command 00:03:35.884 [11/854] Generating lib/rte_ring_mingw with a custom command 00:03:35.884 [12/854] Generating lib/rte_eal_def with a custom command 00:03:35.884 [13/854] Generating lib/rte_ring_def with a custom command 00:03:35.884 [14/854] Generating lib/rte_rcu_mingw with a custom command 00:03:35.884 [15/854] Generating lib/rte_mempool_def with a custom command 00:03:35.884 [16/854] Generating lib/rte_mbuf_mingw with a custom command 00:03:35.884 [17/854] Generating lib/rte_net_def with a custom command 00:03:35.884 [18/854] Generating lib/rte_net_mingw with a custom command 00:03:35.884 [19/854] Generating lib/rte_mempool_mingw with a custom command 00:03:35.884 [20/854] Generating lib/rte_mbuf_def with a custom command 00:03:35.884 [21/854] Generating lib/rte_meter_mingw with a custom command 00:03:35.884 [22/854] Generating lib/rte_meter_def with a custom command 00:03:35.884 [23/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:35.884 [24/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:35.884 [25/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:35.884 [26/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:35.884 [27/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:35.884 [28/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:36.150 [29/854] Generating lib/rte_pci_def with a custom command 00:03:36.150 [30/854] Compiling C object
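(Editorial aside, not part of the CI output: the "User defined options" block above corresponds roughly to the meson/ninja invocation sketched below. This is a reconstruction from the logged values only; the job actually drives the build through common/autobuild_common.sh, and, per the WARNING above, it used the bare `meson [options]` form, whereas the sketch uses the recommended `meson setup`. Paths are the workspace paths shown in the log; enable_docs, enable_drivers, enable_kmods, machine, and tests are DPDK meson options, while prefix, libdir, c_args, and c_link_args are standard meson settings. The ninja progress stream resumes immediately after this note.)

    # Approximate reconstruction of the logged configure + build step (assumed, not verbatim from the job)
    cd /var/jenkins/workspace/crypto-phy-autotest/dpdk
    meson setup build-tmp \
      --prefix=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args="-fPIC -g -fcommon -Werror -Wno-stringop-overflow -I/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib" \
      -Dc_link_args="-L/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib" \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,crypto,crypto/ipsec_mb,crypto/qat,compress/qat,common/qat,bus/auxiliary,common/mlx5,common/mlx5/linux,crypto/mlx5,compress,compress/isal,compress/qat,common/qat,compress/mlx5,
    # Build in the same tree ninja reports entering, with the same parallelism as the log
    ninja -C build-tmp -j112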
lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:36.150 [31/854] Generating lib/rte_ethdev_mingw with a custom command 00:03:36.150 [32/854] Generating lib/rte_ethdev_def with a custom command 00:03:36.150 [33/854] Generating lib/rte_pci_mingw with a custom command 00:03:36.150 [34/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:36.150 [35/854] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:36.151 [36/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:36.151 [37/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:36.151 [38/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:36.151 [39/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:36.151 [40/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:36.151 [41/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:36.151 [42/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:36.151 [43/854] Linking static target lib/librte_kvargs.a 00:03:36.151 [44/854] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:36.151 [45/854] Generating lib/rte_cmdline_def with a custom command 00:03:36.151 [46/854] Generating lib/rte_metrics_mingw with a custom command 00:03:36.151 [47/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:36.151 [48/854] Generating lib/rte_cmdline_mingw with a custom command 00:03:36.151 [49/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:36.151 [50/854] Generating lib/rte_metrics_def with a custom command 00:03:36.151 [51/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:36.151 [52/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:36.151 [53/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:36.151 [54/854] Generating lib/rte_hash_mingw with a custom command 00:03:36.151 [55/854] Generating lib/rte_hash_def with a custom command 00:03:36.151 [56/854] Generating lib/rte_timer_def with a custom command 00:03:36.151 [57/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:36.151 [58/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:36.151 [59/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:36.151 [60/854] Generating lib/rte_timer_mingw with a custom command 00:03:36.151 [61/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:36.151 [62/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:36.151 [63/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:36.151 [64/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:36.151 [65/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:36.151 [66/854] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:36.151 [67/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:36.151 [68/854] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:36.151 [69/854] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:36.151 [70/854] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:36.151 [71/854] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:36.151 [72/854] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:36.151 [73/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:36.151 [74/854] Generating lib/rte_acl_mingw with a custom command 00:03:36.151 [75/854] Generating lib/rte_bbdev_def with a custom command 00:03:36.151 [76/854] Generating lib/rte_acl_def with a custom command 00:03:36.151 [77/854] Linking static target lib/librte_pci.a 00:03:36.151 [78/854] Generating lib/rte_bitratestats_mingw with a custom command 00:03:36.151 [79/854] Generating lib/rte_bbdev_mingw with a custom command 00:03:36.151 [80/854] Generating lib/rte_bitratestats_def with a custom command 00:03:36.151 [81/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:36.151 [82/854] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:36.151 [83/854] Generating lib/rte_bpf_mingw with a custom command 00:03:36.151 [84/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:36.151 [85/854] Generating lib/rte_bpf_def with a custom command 00:03:36.151 [86/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:36.151 [87/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:36.151 [88/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:36.151 [89/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:36.151 [90/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:36.151 [91/854] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:36.151 [92/854] Generating lib/rte_cfgfile_mingw with a custom command 00:03:36.151 [93/854] Generating lib/rte_cfgfile_def with a custom command 00:03:36.151 [94/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:36.151 [95/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:36.151 [96/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:36.151 [97/854] Generating lib/rte_compressdev_mingw with a custom command 00:03:36.151 [98/854] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:36.151 [99/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:36.409 [100/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:36.409 [101/854] Generating lib/rte_compressdev_def with a custom command 00:03:36.409 [102/854] Linking static target lib/librte_meter.a 00:03:36.409 [103/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:36.409 [104/854] Generating lib/rte_cryptodev_mingw with a custom command 00:03:36.409 [105/854] Linking static target lib/librte_ring.a 00:03:36.409 [106/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:36.409 [107/854] Generating lib/rte_cryptodev_def with a custom command 00:03:36.409 [108/854] Generating lib/rte_distributor_def with a custom command 00:03:36.409 [109/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:36.409 [110/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:36.409 [111/854] Generating lib/rte_distributor_mingw with a custom command 00:03:36.409 [112/854] Generating lib/rte_efd_mingw with a custom command 00:03:36.409 [113/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:36.409 [114/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 
00:03:36.409 [115/854] Generating lib/rte_efd_def with a custom command 00:03:36.409 [116/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:36.409 [117/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:36.409 [118/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:36.409 [119/854] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:36.409 [120/854] Generating lib/rte_eventdev_mingw with a custom command 00:03:36.409 [121/854] Generating lib/rte_eventdev_def with a custom command 00:03:36.409 [122/854] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:36.409 [123/854] Generating lib/rte_gpudev_mingw with a custom command 00:03:36.409 [124/854] Generating lib/rte_gpudev_def with a custom command 00:03:36.409 [125/854] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:36.409 [126/854] Generating lib/rte_gro_def with a custom command 00:03:36.409 [127/854] Generating lib/rte_gro_mingw with a custom command 00:03:36.409 [128/854] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:36.409 [129/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:36.409 [130/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:36.409 [131/854] Generating lib/rte_gso_mingw with a custom command 00:03:36.409 [132/854] Generating lib/rte_gso_def with a custom command 00:03:36.669 [133/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:36.669 [134/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:36.669 [135/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:36.669 [136/854] Generating lib/rte_ip_frag_def with a custom command 00:03:36.669 [137/854] Generating lib/rte_ip_frag_mingw with a custom command 00:03:36.669 [138/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:36.669 [139/854] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.669 [140/854] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.669 [141/854] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:36.669 [142/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:36.669 [143/854] Linking static target lib/librte_cfgfile.a 00:03:36.669 [144/854] Generating lib/rte_jobstats_def with a custom command 00:03:36.669 [145/854] Generating lib/rte_jobstats_mingw with a custom command 00:03:36.669 [146/854] Linking target lib/librte_kvargs.so.23.0 00:03:36.669 [147/854] Generating lib/rte_latencystats_def with a custom command 00:03:36.669 [148/854] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:36.669 [149/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:36.669 [150/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:36.669 [151/854] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.669 [152/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:36.669 [153/854] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:36.669 [154/854] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:36.669 [155/854] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:36.669 [156/854] Compiling C object 
lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:36.669 [157/854] Generating lib/rte_lpm_def with a custom command 00:03:36.669 [158/854] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:36.669 [159/854] Generating lib/rte_lpm_mingw with a custom command 00:03:36.669 [160/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:36.669 [161/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:36.932 [162/854] Generating lib/rte_member_def with a custom command 00:03:36.932 [163/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:36.932 [164/854] Generating lib/rte_latencystats_mingw with a custom command 00:03:36.932 [165/854] Generating lib/rte_member_mingw with a custom command 00:03:36.932 [166/854] Generating lib/rte_pcapng_def with a custom command 00:03:36.932 [167/854] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:36.932 [168/854] Generating lib/rte_pcapng_mingw with a custom command 00:03:36.932 [169/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:36.932 [170/854] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.932 [171/854] Linking static target lib/librte_jobstats.a 00:03:36.932 [172/854] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:36.932 [173/854] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:36.932 [174/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:36.932 [175/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:36.932 [176/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:36.932 [177/854] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:36.932 [178/854] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:36.932 [179/854] Generating lib/rte_rawdev_def with a custom command 00:03:36.932 [180/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:36.932 [181/854] Generating lib/rte_power_def with a custom command 00:03:36.932 [182/854] Generating lib/rte_power_mingw with a custom command 00:03:36.932 [183/854] Linking static target lib/librte_telemetry.a 00:03:36.932 [184/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:36.932 [185/854] Linking static target lib/librte_cmdline.a 00:03:36.932 [186/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:36.932 [187/854] Generating lib/rte_rawdev_mingw with a custom command 00:03:36.932 [188/854] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:36.932 [189/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:36.932 [190/854] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:36.932 [191/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:36.932 [192/854] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:36.932 [193/854] Linking static target lib/librte_timer.a 00:03:36.932 [194/854] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:36.932 [195/854] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:36.932 [196/854] Generating lib/rte_dmadev_def with a custom command 00:03:36.932 [197/854] Linking static target lib/librte_metrics.a 00:03:36.932 [198/854] Generating lib/rte_regexdev_mingw with a custom command 
00:03:36.932 [199/854] Generating lib/rte_regexdev_def with a custom command 00:03:36.932 [200/854] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:36.932 [201/854] Generating lib/rte_dmadev_mingw with a custom command 00:03:36.932 [202/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:36.932 [203/854] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:36.932 [204/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:36.932 [205/854] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:36.932 [206/854] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:36.932 [207/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:36.932 [208/854] Generating lib/rte_rib_mingw with a custom command 00:03:36.932 [209/854] Generating lib/rte_rib_def with a custom command 00:03:36.932 [210/854] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:36.932 [211/854] Generating lib/rte_reorder_def with a custom command 00:03:36.932 [212/854] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:36.932 [213/854] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:36.932 [214/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:36.932 [215/854] Generating lib/rte_reorder_mingw with a custom command 00:03:36.932 [216/854] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:36.932 [217/854] Linking static target lib/librte_bitratestats.a 00:03:36.932 [218/854] Generating lib/rte_sched_mingw with a custom command 00:03:37.194 [219/854] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:37.194 [220/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:37.194 [221/854] Generating lib/rte_sched_def with a custom command 00:03:37.194 [222/854] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:37.194 [223/854] Generating lib/rte_security_mingw with a custom command 00:03:37.194 [224/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:37.194 [225/854] Linking static target lib/librte_net.a 00:03:37.194 [226/854] Generating lib/rte_security_def with a custom command 00:03:37.194 [227/854] Generating lib/rte_stack_mingw with a custom command 00:03:37.194 [228/854] Generating lib/rte_stack_def with a custom command 00:03:37.194 [229/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:37.194 [230/854] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:37.194 [231/854] Generating lib/rte_vhost_def with a custom command 00:03:37.194 [232/854] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:37.194 [233/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:37.194 [234/854] Generating lib/rte_vhost_mingw with a custom command 00:03:37.194 [235/854] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:37.194 [236/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:37.194 [237/854] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:37.194 [238/854] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:37.194 [239/854] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:37.194 [240/854] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:37.194 [241/854] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:37.194 [242/854] Compiling 
C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:37.194 [243/854] Generating lib/rte_ipsec_def with a custom command 00:03:37.194 [244/854] Generating lib/rte_ipsec_mingw with a custom command 00:03:37.194 [245/854] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:37.194 [246/854] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:37.194 [247/854] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:37.194 [248/854] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:37.194 [249/854] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:37.194 [250/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:37.194 [251/854] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:37.194 [252/854] Linking static target lib/librte_stack.a 00:03:37.194 [253/854] Generating lib/rte_fib_def with a custom command 00:03:37.194 [254/854] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:37.194 [255/854] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:37.194 [256/854] Generating lib/rte_fib_mingw with a custom command 00:03:37.194 [257/854] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:37.194 [258/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:37.194 [259/854] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:37.194 [260/854] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:37.194 [261/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:37.194 [262/854] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:37.463 [263/854] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.463 [264/854] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:37.463 [265/854] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:37.463 [266/854] Generating lib/rte_port_def with a custom command 00:03:37.463 [267/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:37.463 [268/854] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:37.463 [269/854] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:37.463 [270/854] Generating lib/rte_port_mingw with a custom command 00:03:37.463 [271/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:37.463 [272/854] Generating lib/rte_pdump_def with a custom command 00:03:37.463 [273/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:37.463 [274/854] Generating lib/rte_pdump_mingw with a custom command 00:03:37.463 [275/854] Linking static target lib/librte_compressdev.a 00:03:37.463 [276/854] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:37.463 [277/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:37.463 [278/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:37.463 [279/854] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:37.463 [280/854] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.463 [281/854] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:37.463 [282/854] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:37.463 [283/854] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:37.463 
[284/854] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.463 [285/854] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:37.463 [286/854] Linking static target lib/librte_latencystats.a 00:03:37.463 [287/854] Linking static target lib/librte_rcu.a 00:03:37.463 [288/854] Linking static target lib/librte_rawdev.a 00:03:37.463 [289/854] Generating lib/rte_table_def with a custom command 00:03:37.463 [290/854] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:37.463 [291/854] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:37.463 [292/854] Generating lib/rte_table_mingw with a custom command 00:03:37.463 [293/854] Linking static target lib/librte_mempool.a 00:03:37.463 [294/854] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:37.463 [295/854] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.463 [296/854] Linking static target lib/librte_bbdev.a 00:03:37.463 [297/854] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:37.463 [298/854] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:37.728 [299/854] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:37.728 [300/854] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:37.728 [301/854] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:37.728 [302/854] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:37.728 [303/854] Linking static target lib/librte_gro.a 00:03:37.728 [304/854] Linking static target lib/librte_gpudev.a 00:03:37.728 [305/854] Linking static target lib/librte_dmadev.a 00:03:37.728 [306/854] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:37.728 [307/854] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.728 [308/854] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:37.728 [309/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:37.728 [310/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:37.728 [311/854] Generating lib/rte_pipeline_def with a custom command 00:03:37.728 [312/854] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.728 [313/854] Generating lib/rte_pipeline_mingw with a custom command 00:03:37.728 [314/854] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.728 [315/854] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:37.728 [316/854] Linking target lib/librte_telemetry.so.23.0 00:03:37.728 [317/854] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:37.728 [318/854] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.729 [319/854] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:37.729 [320/854] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:37.729 [321/854] Linking static target lib/librte_gso.a 00:03:37.729 [322/854] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:37.729 [323/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:37.729 [324/854] Generating lib/rte_graph_def with a custom command 00:03:37.729 [325/854] Generating lib/rte_graph_mingw with a custom command 00:03:37.729 [326/854] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:37.729 [327/854] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:37.729 [328/854] Linking static target lib/librte_distributor.a 00:03:37.729 [329/854] Linking static target lib/librte_ip_frag.a 00:03:37.729 [330/854] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:37.992 [331/854] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:37.992 [332/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:37.992 [333/854] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:37.992 [334/854] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:37.992 [335/854] Linking static target lib/librte_regexdev.a 00:03:37.992 [336/854] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:37.992 [337/854] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:37.992 [338/854] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.992 [339/854] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:37.992 [340/854] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:37.992 [341/854] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:37.992 [342/854] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:37.992 [343/854] Generating lib/rte_node_def with a custom command 00:03:37.992 [344/854] Generating lib/rte_node_mingw with a custom command 00:03:37.992 [345/854] Linking static target lib/librte_eal.a 00:03:37.992 [346/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:37.992 [347/854] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:37.992 [348/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:37.992 [349/854] Generating drivers/rte_bus_auxiliary_def with a custom command 00:03:37.992 [350/854] Generating drivers/rte_bus_auxiliary_mingw with a custom command 00:03:37.992 [351/854] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.992 [352/854] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:37.992 [353/854] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:03:37.992 [354/854] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.992 [355/854] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:37.992 [356/854] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:37.992 [357/854] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:37.992 [358/854] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:37.992 [359/854] Generating drivers/rte_bus_pci_def with a custom command 00:03:37.992 [360/854] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:37.992 [361/854] Linking static target lib/librte_reorder.a 00:03:37.992 [362/854] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:38.254 [363/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:38.254 [364/854] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.254 [365/854] Generating drivers/rte_bus_vdev_def with a custom command 00:03:38.254 [366/854] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:38.254 [367/854] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:38.254 [368/854] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:38.254 [369/854] Linking static target lib/librte_security.a 00:03:38.254 [370/854] Linking static target lib/librte_pcapng.a 00:03:38.254 [371/854] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:38.254 [372/854] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:38.254 [373/854] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:38.254 [374/854] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:38.254 [375/854] Linking static target lib/librte_bpf.a 00:03:38.254 [376/854] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:38.254 [377/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:38.254 [378/854] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:03:38.254 [379/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:38.254 [380/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:38.254 [381/854] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.254 [382/854] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:38.254 [383/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:38.254 [384/854] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:38.254 [385/854] Generating drivers/rte_common_mlx5_def with a custom command 00:03:38.254 [386/854] Generating drivers/rte_common_mlx5_mingw with a custom command 00:03:38.254 [387/854] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.254 [388/854] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:38.254 [389/854] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:38.254 [390/854] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.254 [391/854] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:38.517 [392/854] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:38.517 [393/854] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:38.517 [394/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:03:38.517 [395/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:38.517 [396/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:38.517 [397/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:38.517 [398/854] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:38.517 [399/854] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:38.517 [400/854] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:38.517 [401/854] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:38.517 [402/854] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:38.517 [403/854] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:38.517 [404/854] Linking static target lib/librte_power.a 00:03:38.517 [405/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:38.517 [406/854] Compiling C object 
lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:38.517 [407/854] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:38.517 [408/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:38.517 [409/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:38.517 [410/854] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.517 [411/854] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:38.517 [412/854] Generating drivers/rte_common_qat_def with a custom command 00:03:38.517 [413/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:38.517 [414/854] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:38.517 [415/854] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:38.517 [416/854] Generating drivers/rte_common_qat_mingw with a custom command 00:03:38.517 [417/854] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:38.517 [418/854] Linking static target lib/librte_rib.a 00:03:38.518 [419/854] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:38.518 [420/854] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:38.518 [421/854] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:38.518 [422/854] Generating drivers/rte_mempool_ring_def with a custom command 00:03:38.518 [423/854] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.518 [424/854] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:38.518 [425/854] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:38.518 [426/854] Linking static target lib/librte_lpm.a 00:03:38.780 [427/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:38.780 [428/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:38.780 [429/854] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:38.780 [430/854] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:03:38.780 [431/854] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:38.780 [432/854] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:38.780 [433/854] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:03:38.780 [434/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:38.780 [435/854] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:38.780 [436/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:03:38.780 [437/854] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:38.780 [438/854] Linking static target lib/librte_graph.a 00:03:38.780 [439/854] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:38.780 [440/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:38.780 [441/854] Linking static target lib/librte_efd.a 00:03:38.780 [442/854] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.780 [443/854] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.780 [444/854] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:38.780 [445/854] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:38.780 [446/854] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.780 
[447/854] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:38.780 [448/854] Generating drivers/rte_net_i40e_def with a custom command 00:03:38.780 [449/854] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:38.780 [450/854] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:38.780 [451/854] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.780 [452/854] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.780 [453/854] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.043 [454/854] Linking static target drivers/librte_bus_vdev.a 00:03:39.043 [455/854] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:39.043 [456/854] Generating drivers/rte_crypto_ipsec_mb_def with a custom command 00:03:39.043 [457/854] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:39.043 [458/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:03:39.043 [459/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:39.043 [460/854] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:39.043 [461/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:03:39.043 [462/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:39.043 [463/854] Generating drivers/rte_crypto_ipsec_mb_mingw with a custom command 00:03:39.043 [464/854] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:39.043 [465/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:03:39.043 [466/854] Generating drivers/rte_crypto_mlx5_def with a custom command 00:03:39.043 [467/854] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:03:39.043 [468/854] Generating drivers/rte_crypto_mlx5_mingw with a custom command 00:03:39.043 [469/854] Linking static target lib/librte_fib.a 00:03:39.043 [470/854] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.043 [471/854] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:39.043 [472/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:03:39.043 [473/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:39.043 [474/854] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:03:39.043 [475/854] Compiling C object drivers/librte_bus_auxiliary.so.23.0.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:03:39.043 [476/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:03:39.043 [477/854] Generating drivers/rte_compress_isal_def with a custom command 00:03:39.043 [478/854] Generating drivers/rte_compress_isal_mingw with a custom command 00:03:39.043 [479/854] Linking static target drivers/librte_bus_auxiliary.a 00:03:39.043 [480/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:03:39.043 [481/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:03:39.043 [482/854] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.043 [483/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:03:39.043 [484/854] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:03:39.043 [485/854] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.043 [486/854] Generating drivers/rte_compress_mlx5_def with a custom command 00:03:39.043 [487/854] Generating drivers/rte_compress_mlx5_mingw with a custom command 00:03:39.305 [488/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:03:39.305 [489/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:03:39.305 [490/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:03:39.305 [491/854] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:39.305 [492/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:03:39.305 [493/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:03:39.305 [494/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:39.305 [495/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:03:39.305 [496/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:03:39.305 [497/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:03:39.305 [498/854] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:39.305 [499/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:39.305 [500/854] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.305 [501/854] Linking static target lib/librte_pdump.a 00:03:39.305 [502/854] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.305 [503/854] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:39.305 [504/854] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:39.305 [505/854] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.305 [506/854] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.305 [507/854] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:39.305 [508/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:39.305 [509/854] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:39.305 [510/854] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:39.305 [511/854] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.305 [512/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:03:39.305 [513/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:39.305 [514/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:39.305 [515/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:03:39.305 [516/854] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:39.570 [517/854] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.570 [518/854] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.570 [519/854] Linking static target drivers/librte_bus_pci.a 00:03:39.570 [520/854] Compiling C 
object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:03:39.570 [521/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:39.570 [522/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:03:39.570 [523/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:03:39.570 [524/854] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.570 [525/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:03:39.570 [526/854] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.570 [527/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:03:39.570 [528/854] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:39.570 [529/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:03:39.570 [530/854] Linking static target lib/librte_table.a 00:03:39.570 [531/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:03:39.570 [532/854] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:39.835 [533/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:39.835 [534/854] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.835 [535/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:03:39.835 [536/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:03:39.835 [537/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:39.835 [538/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:39.836 [539/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:39.836 [540/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:39.836 [541/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:39.836 [542/854] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.836 [543/854] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:40.095 [544/854] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:03:40.095 [545/854] Linking static target lib/librte_sched.a 00:03:40.096 [546/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:03:40.096 [547/854] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:40.096 [548/854] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:40.096 [549/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:40.096 [550/854] Linking static target lib/librte_cryptodev.a 00:03:40.096 [551/854] Linking static target lib/librte_eventdev.a 00:03:40.096 [552/854] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:03:40.096 [553/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:40.096 [554/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:40.096 [555/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:03:40.096 [556/854] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:40.096 [557/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:40.096 [558/854] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:40.096 [559/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:40.096 [560/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:40.096 [561/854] Linking static target lib/librte_node.a 00:03:40.358 [562/854] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:40.358 [563/854] Linking static target lib/librte_ipsec.a 00:03:40.358 [564/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:03:40.358 [565/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:40.358 [566/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:03:40.358 [567/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:40.358 [568/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:40.358 [569/854] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.358 [570/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:40.358 [571/854] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:03:40.358 [572/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:40.358 [573/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:40.358 [574/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:40.358 [575/854] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:03:40.358 [576/854] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:40.358 [577/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:40.358 [578/854] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:40.358 [579/854] Linking static target lib/librte_ethdev.a 00:03:40.358 [580/854] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:40.358 [581/854] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:40.358 [582/854] Linking static target lib/librte_mbuf.a 00:03:40.358 [583/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:40.618 [584/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:40.618 [585/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:40.618 [586/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:40.618 [587/854] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:40.618 [588/854] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:40.618 [589/854] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:40.618 [590/854] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:03:40.618 [591/854] Linking static target lib/librte_member.a 00:03:40.618 [592/854] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.618 [593/854] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.618 
[594/854] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:03:40.618 [595/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:40.618 [596/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:40.618 [597/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:40.618 [598/854] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:40.618 [599/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:40.618 [600/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:40.618 [601/854] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:40.618 [602/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:40.618 [603/854] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.618 [604/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:40.618 [605/854] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:40.618 [606/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:40.618 [607/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:40.618 [608/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:40.618 [609/854] Linking static target lib/librte_port.a 00:03:40.618 [610/854] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:03:40.618 [611/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:40.618 [612/854] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.618 [613/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:40.877 [614/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:03:40.877 [615/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:40.877 [616/854] Compiling C object drivers/librte_crypto_mlx5.so.23.0.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:03:40.877 [617/854] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:03:40.877 [618/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:40.877 [619/854] Linking static target drivers/librte_crypto_mlx5.a 00:03:40.877 [620/854] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.877 [621/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:40.877 [622/854] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:40.877 [623/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:40.877 [624/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:40.877 [625/854] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.877 [626/854] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.877 [627/854] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:03:40.877 [628/854] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.877 [629/854] Linking static 
target drivers/librte_mempool_ring.a 00:03:40.877 [630/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:03:40.877 [631/854] Compiling C object drivers/librte_compress_mlx5.so.23.0.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:03:40.877 [632/854] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:03:40.877 [633/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:40.877 [634/854] Linking static target drivers/librte_compress_mlx5.a 00:03:40.877 [635/854] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:03:40.877 [636/854] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:40.877 [637/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:03:40.877 [638/854] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.877 [639/854] Linking static target drivers/libtmp_rte_compress_isal.a 00:03:40.877 [640/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:03:40.877 [641/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:40.877 [642/854] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:41.136 [643/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:41.136 [644/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:41.136 [645/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:41.136 [646/854] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:41.136 [647/854] Linking static target lib/librte_hash.a 00:03:41.136 [648/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:41.136 [649/854] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:41.136 [650/854] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:41.136 [651/854] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:41.136 [652/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:41.136 [653/854] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:41.137 [654/854] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:41.137 [655/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:03:41.137 [656/854] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:03:41.137 [657/854] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:41.137 [658/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:41.137 [659/854] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:03:41.137 [660/854] Linking static target drivers/librte_compress_isal.a 00:03:41.137 [661/854] Compiling C object drivers/librte_compress_isal.so.23.0.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:03:41.137 [662/854] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:41.137 [663/854] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:41.137 [664/854] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.137 [665/854] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:41.137 [666/854] 
Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:41.137 [667/854] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:41.137 [668/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:41.137 [669/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:41.396 [670/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:03:41.396 [671/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:41.396 [672/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:41.396 [673/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:41.396 [674/854] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:41.396 [675/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:41.396 [676/854] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:41.396 [677/854] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:41.396 [678/854] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:41.396 [679/854] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:41.396 [680/854] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:41.396 [681/854] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:03:41.396 [682/854] Linking static target drivers/libtmp_rte_common_mlx5.a 00:03:41.396 [683/854] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:41.396 [684/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:41.396 [685/854] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:41.655 [686/854] Linking static target lib/librte_acl.a 00:03:41.655 [687/854] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.655 [688/854] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:03:41.655 [689/854] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:03:41.655 [690/854] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:41.655 [691/854] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:41.655 [692/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:03:41.655 [693/854] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:41.655 [694/854] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:41.655 [695/854] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:41.655 [696/854] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:03:41.914 [697/854] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:41.914 [698/854] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:03:41.914 [699/854] Compiling C object drivers/librte_common_mlx5.so.23.0.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:03:41.914 [700/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:41.914 [701/854] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:03:41.914 [702/854] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:41.914 [703/854] Linking static target drivers/librte_common_mlx5.a 
00:03:41.914 [704/854] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:03:41.914 [705/854] Compiling C object drivers/librte_crypto_ipsec_mb.so.23.0.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:03:41.914 [706/854] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:41.914 [707/854] Linking static target drivers/librte_crypto_ipsec_mb.a 00:03:41.914 [708/854] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:41.914 [709/854] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.173 [710/854] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.173 [711/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:03:42.173 [712/854] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:42.432 [713/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:42.692 [714/854] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:42.692 [715/854] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:43.260 [716/854] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:43.260 [717/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:43.519 [718/854] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.519 [719/854] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:43.778 [720/854] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.778 [721/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:44.037 [722/854] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:44.037 [723/854] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:44.295 [724/854] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:03:44.295 [725/854] Linking static target drivers/libtmp_rte_common_qat.a 00:03:44.554 [726/854] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:44.554 [727/854] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:44.554 [728/854] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:44.554 [729/854] Linking static target drivers/librte_net_i40e.a 00:03:44.554 [730/854] Generating drivers/rte_common_qat.pmd.c with a custom command 00:03:44.813 [731/854] Compiling C object drivers/librte_common_qat.so.23.0.p/meson-generated_.._rte_common_qat.pmd.c.o 00:03:44.813 [732/854] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:03:44.813 [733/854] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:44.813 [734/854] Linking static target drivers/librte_common_qat.a 00:03:45.749 [735/854] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:45.749 [736/854] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.652 [737/854] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.581 [738/854] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.872 [739/854] Generating lib/eal.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:52.872 [740/854] Linking target lib/librte_eal.so.23.0 00:03:53.131 [741/854] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:53.131 [742/854] Linking target lib/librte_pci.so.23.0 00:03:53.131 [743/854] Linking target lib/librte_meter.so.23.0 00:03:53.131 [744/854] Linking target lib/librte_timer.so.23.0 00:03:53.131 [745/854] Linking target lib/librte_ring.so.23.0 00:03:53.131 [746/854] Linking target lib/librte_stack.so.23.0 00:03:53.131 [747/854] Linking target lib/librte_dmadev.so.23.0 00:03:53.131 [748/854] Linking target lib/librte_jobstats.so.23.0 00:03:53.131 [749/854] Linking target lib/librte_cfgfile.so.23.0 00:03:53.131 [750/854] Linking target lib/librte_rawdev.so.23.0 00:03:53.131 [751/854] Linking target lib/librte_graph.so.23.0 00:03:53.131 [752/854] Linking target drivers/librte_bus_auxiliary.so.23.0 00:03:53.131 [753/854] Linking target drivers/librte_bus_vdev.so.23.0 00:03:53.131 [754/854] Linking target lib/librte_acl.so.23.0 00:03:53.131 [755/854] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:53.131 [756/854] Linking static target lib/librte_vhost.a 00:03:53.131 [757/854] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:53.131 [758/854] Generating symbol file drivers/librte_bus_auxiliary.so.23.0.p/librte_bus_auxiliary.so.23.0.symbols 00:03:53.131 [759/854] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:53.131 [760/854] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:53.131 [761/854] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:53.131 [762/854] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:53.131 [763/854] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:53.131 [764/854] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:53.131 [765/854] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:53.131 [766/854] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:53.390 [767/854] Linking target drivers/librte_bus_pci.so.23.0 00:03:53.390 [768/854] Linking static target lib/librte_pipeline.a 00:03:53.390 [769/854] Linking target lib/librte_mempool.so.23.0 00:03:53.390 [770/854] Linking target lib/librte_rcu.so.23.0 00:03:53.390 [771/854] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:53.390 [772/854] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:53.390 [773/854] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:53.390 [774/854] Linking target lib/librte_mbuf.so.23.0 00:03:53.390 [775/854] Linking target drivers/librte_mempool_ring.so.23.0 00:03:53.390 [776/854] Linking target lib/librte_rib.so.23.0 00:03:53.650 [777/854] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:53.650 [778/854] Linking target lib/librte_cryptodev.so.23.0 00:03:53.650 [779/854] Linking target lib/librte_distributor.so.23.0 00:03:53.650 [780/854] Linking target lib/librte_net.so.23.0 00:03:53.650 [781/854] Linking target lib/librte_compressdev.so.23.0 00:03:53.650 [782/854] Linking target lib/librte_bbdev.so.23.0 00:03:53.650 [783/854] Linking target lib/librte_reorder.so.23.0 00:03:53.650 [784/854] Linking target lib/librte_regexdev.so.23.0 00:03:53.650 [785/854] 
Linking target lib/librte_gpudev.so.23.0 00:03:53.650 [786/854] Linking target lib/librte_sched.so.23.0 00:03:53.909 [787/854] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:53.909 [788/854] Linking target lib/librte_fib.so.23.0 00:03:53.909 [789/854] Linking target app/dpdk-testpmd 00:03:53.909 [790/854] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:53.909 [791/854] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:53.909 [792/854] Linking target app/dpdk-test-cmdline 00:03:53.909 [793/854] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:53.909 [794/854] Linking target app/dpdk-dumpcap 00:03:53.909 [795/854] Linking target app/dpdk-test-acl 00:03:53.909 [796/854] Generating symbol file lib/librte_compressdev.so.23.0.p/librte_compressdev.so.23.0.symbols 00:03:53.909 [797/854] Linking target app/dpdk-test-fib 00:03:53.909 [798/854] Linking target app/dpdk-pdump 00:03:53.909 [799/854] Linking target app/dpdk-test-regex 00:03:53.909 [800/854] Linking target app/dpdk-proc-info 00:03:53.909 [801/854] Linking target app/dpdk-test-gpudev 00:03:53.909 [802/854] Linking target app/dpdk-test-flow-perf 00:03:53.909 [803/854] Linking target app/dpdk-test-compress-perf 00:03:53.909 [804/854] Linking target app/dpdk-test-sad 00:03:53.909 [805/854] Linking target app/dpdk-test-pipeline 00:03:53.909 [806/854] Linking target app/dpdk-test-security-perf 00:03:53.909 [807/854] Linking target app/dpdk-test-crypto-perf 00:03:53.909 [808/854] Linking target app/dpdk-test-bbdev 00:03:53.909 [809/854] Linking target lib/librte_security.so.23.0 00:03:53.909 [810/854] Linking target lib/librte_cmdline.so.23.0 00:03:53.909 [811/854] Linking target lib/librte_hash.so.23.0 00:03:53.909 [812/854] Linking target lib/librte_ethdev.so.23.0 00:03:53.909 [813/854] Linking target drivers/librte_compress_isal.so.23.0 00:03:53.909 [814/854] Linking target app/dpdk-test-eventdev 00:03:54.168 [815/854] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:54.168 [816/854] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:54.168 [817/854] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:54.168 [818/854] Linking target drivers/librte_crypto_ipsec_mb.so.23.0 00:03:54.168 [819/854] Linking target lib/librte_efd.so.23.0 00:03:54.168 [820/854] Linking target lib/librte_lpm.so.23.0 00:03:54.168 [821/854] Linking target lib/librte_metrics.so.23.0 00:03:54.168 [822/854] Linking target lib/librte_pcapng.so.23.0 00:03:54.168 [823/854] Linking target lib/librte_member.so.23.0 00:03:54.168 [824/854] Linking target lib/librte_gso.so.23.0 00:03:54.168 [825/854] Linking target lib/librte_ipsec.so.23.0 00:03:54.168 [826/854] Linking target lib/librte_ip_frag.so.23.0 00:03:54.168 [827/854] Linking target lib/librte_power.so.23.0 00:03:54.168 [828/854] Linking target lib/librte_gro.so.23.0 00:03:54.168 [829/854] Linking target lib/librte_bpf.so.23.0 00:03:54.168 [830/854] Linking target drivers/librte_common_mlx5.so.23.0 00:03:54.168 [831/854] Linking target lib/librte_eventdev.so.23.0 00:03:54.168 [832/854] Linking target drivers/librte_common_qat.so.23.0 00:03:54.168 [833/854] Linking target drivers/librte_net_i40e.so.23.0 00:03:54.427 [834/854] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:54.427 [835/854] Generating symbol file 
lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:54.427 [836/854] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:54.427 [837/854] Generating symbol file drivers/librte_common_mlx5.so.23.0.p/librte_common_mlx5.so.23.0.symbols 00:03:54.427 [838/854] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:54.427 [839/854] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:54.427 [840/854] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:54.427 [841/854] Linking target lib/librte_bitratestats.so.23.0 00:03:54.427 [842/854] Linking target drivers/librte_crypto_mlx5.so.23.0 00:03:54.427 [843/854] Linking target drivers/librte_compress_mlx5.so.23.0 00:03:54.427 [844/854] Linking target lib/librte_node.so.23.0 00:03:54.427 [845/854] Linking target lib/librte_pdump.so.23.0 00:03:54.427 [846/854] Linking target lib/librte_latencystats.so.23.0 00:03:54.427 [847/854] Linking target lib/librte_port.so.23.0 00:03:54.686 [848/854] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:54.686 [849/854] Linking target lib/librte_table.so.23.0 00:03:54.946 [850/854] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:55.515 [851/854] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.515 [852/854] Linking target lib/librte_vhost.so.23.0 00:03:58.813 [853/854] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.813 [854/854] Linking target lib/librte_pipeline.so.23.0 00:03:58.813 10:14:11 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:58.813 10:14:11 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:58.813 10:14:11 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp -j112 install 00:03:58.813 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp' 00:03:58.813 [0/1] Installing files. 
00:03:59.076 Installing subdir /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:59.076 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 
00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:59.077 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.077 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:59.078 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 
Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:59.078 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:59.078 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:59.079 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sad.h 
to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.079 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/action.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.080 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/t1.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/Makefile to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.081 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 
Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:59.082 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:59.082 Installing lib/librte_kvargs.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_telemetry.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_eal.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_ring.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_rcu.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_mempool.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_mbuf.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_net.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_meter.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_ethdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_pci.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing 
lib/librte_cmdline.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_metrics.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_hash.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.082 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_timer.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_acl.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_bbdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_bpf.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_compressdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_distributor.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_efd.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_eventdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_gpudev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_gro.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_gso.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 
00:03:59.342 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_jobstats.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_latencystats.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_lpm.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_member.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_pcapng.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_power.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_rawdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_regexdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_dmadev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_rib.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_reorder.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_sched.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_security.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_stack.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_vhost.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_vhost.so.23.0 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_ipsec.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_fib.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_port.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_pdump.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_table.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_pipeline.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.342 Installing lib/librte_graph.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing lib/librte_node.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing drivers/librte_bus_auxiliary.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing drivers/librte_bus_auxiliary.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.343 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.343 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.343 Installing drivers/librte_common_mlx5.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.343 Installing drivers/librte_common_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_common_qat.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_common_qat.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_crypto_ipsec_mb.a to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_crypto_ipsec_mb.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_crypto_mlx5.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_crypto_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_compress_isal.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_compress_isal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing drivers/librte_compress_mlx5.a to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:03:59.605 Installing drivers/librte_compress_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:59.605 Installing app/dpdk-dumpcap to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-pdump to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-proc-info to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-acl to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-fib to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-testpmd to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-regex to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-sad to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include/generic 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.605 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.606 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.607 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 
00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.608 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 
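The long run of "Installing ... to .../dpdk/build/include" entries above and below is the meson install step staging every public DPDK header (and, a few entries below, the pkg-config metadata) into the build prefix that SPDK is configured against later in this log. A minimal, hedged sanity check of that staging prefix might look like the sketch below; it is not part of the CI job, the paths are taken from the log, and DPDK_BUILD is just shorthand introduced here.

  # Hedged sketch, not part of the CI job: spot-check the staged DPDK prefix.
  DPDK_BUILD=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build
  ls "$DPDK_BUILD"/include/rte_ethdev.h            # public headers staged above
  ls "$DPDK_BUILD"/include/generic/rte_cpuflags.h  # arch-generic headers staged above
  ls "$DPDK_BUILD"/lib/pkgconfig/libdpdk.pc        # pkg-config file, installed just below
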
Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/bin 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig 00:03:59.609 Installing /var/jenkins/workspace/crypto-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig 00:03:59.609 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:59.609 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:59.609 Installing symlink pointing to librte_telemetry.so.23.0 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:59.609 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:59.609 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:59.609 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:59.609 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:59.609 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:59.609 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:59.609 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:59.609 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:59.609 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:59.609 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:59.609 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:59.609 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:59.609 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_net.so 00:03:59.609 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:59.609 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:59.610 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:59.610 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:59.610 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:59.610 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:59.610 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:59.610 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:59.610 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:59.610 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:59.610 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:59.610 Installing symlink pointing to 
librte_hash.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:59.610 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:59.610 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:59.610 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:59.610 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:59.610 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:59.610 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:59.610 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:59.610 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:59.610 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:59.610 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:59.610 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:59.610 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:59.610 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:59.610 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:59.610 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:59.610 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:59.610 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:59.610 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:59.610 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:59.610 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:59.610 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:59.610 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:59.610 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:59.610 Installing symlink pointing to librte_gpudev.so.23 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:59.610 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:59.610 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:59.610 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:59.610 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:59.610 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:59.610 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:59.610 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:59.610 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:59.610 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:59.610 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:59.610 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:59.610 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:59.610 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:59.610 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_member.so 00:03:59.610 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:59.610 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:59.610 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:59.610 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_power.so 00:03:59.610 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:59.610 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:59.610 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:59.610 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:59.610 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:59.610 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_dmadev.so 
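The versioned librte_*.so.23 / librte_*.so symlinks created in this stretch are what the libdpdk.pc file installed earlier in the log resolves against. A hedged sketch (not from the job output) of how a consumer can point pkg-config at the staged tree, as SPDK's configure does further down:

  # Hedged sketch, not part of the CI job: resolve the staged DPDK through pkg-config.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk   # version of the DPDK tree staged above
  pkg-config --cflags libdpdk       # include flags pointing at the staged build/include
  pkg-config --libs libdpdk         # link flags pointing at the staged build/lib
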
00:03:59.610 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:59.610 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:59.610 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:59.610 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:59.610 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:59.610 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:59.610 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:59.610 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_security.so 00:03:59.611 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:59.611 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:59.611 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:59.611 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:59.611 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:59.611 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:59.611 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:59.611 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:59.611 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:59.611 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_port.so 00:03:59.611 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:59.611 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:59.611 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:59.611 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_table.so 00:03:59.611 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:59.611 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:59.611 Installing symlink pointing to librte_graph.so.23.0 to 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:59.611 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:59.611 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:59.611 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/librte_node.so 00:03:59.611 Installing symlink pointing to librte_bus_auxiliary.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so.23 00:03:59.611 Installing symlink pointing to librte_bus_auxiliary.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so 00:03:59.611 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:59.611 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:59.611 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:59.611 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:59.611 Installing symlink pointing to librte_common_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so.23 00:03:59.611 Installing symlink pointing to librte_common_mlx5.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so 00:03:59.611 Installing symlink pointing to librte_common_qat.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so.23 00:03:59.611 Installing symlink pointing to librte_common_qat.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so 00:03:59.611 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:59.611 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:59.611 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:59.611 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:59.611 './librte_bus_auxiliary.so' -> 'dpdk/pmds-23.0/librte_bus_auxiliary.so' 00:03:59.611 './librte_bus_auxiliary.so.23' -> 'dpdk/pmds-23.0/librte_bus_auxiliary.so.23' 00:03:59.611 './librte_bus_auxiliary.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_auxiliary.so.23.0' 00:03:59.611 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:59.611 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:59.611 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:59.611 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:59.611 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:59.611 './librte_bus_vdev.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:59.611 './librte_common_mlx5.so' -> 'dpdk/pmds-23.0/librte_common_mlx5.so' 00:03:59.611 './librte_common_mlx5.so.23' -> 'dpdk/pmds-23.0/librte_common_mlx5.so.23' 00:03:59.611 './librte_common_mlx5.so.23.0' -> 'dpdk/pmds-23.0/librte_common_mlx5.so.23.0' 00:03:59.611 './librte_common_qat.so' -> 'dpdk/pmds-23.0/librte_common_qat.so' 00:03:59.611 './librte_common_qat.so.23' -> 'dpdk/pmds-23.0/librte_common_qat.so.23' 00:03:59.611 './librte_common_qat.so.23.0' -> 'dpdk/pmds-23.0/librte_common_qat.so.23.0' 00:03:59.611 './librte_compress_isal.so' -> 'dpdk/pmds-23.0/librte_compress_isal.so' 00:03:59.611 './librte_compress_isal.so.23' -> 'dpdk/pmds-23.0/librte_compress_isal.so.23' 00:03:59.611 './librte_compress_isal.so.23.0' -> 'dpdk/pmds-23.0/librte_compress_isal.so.23.0' 00:03:59.611 './librte_compress_mlx5.so' -> 'dpdk/pmds-23.0/librte_compress_mlx5.so' 00:03:59.611 './librte_compress_mlx5.so.23' -> 'dpdk/pmds-23.0/librte_compress_mlx5.so.23' 00:03:59.611 './librte_compress_mlx5.so.23.0' -> 'dpdk/pmds-23.0/librte_compress_mlx5.so.23.0' 00:03:59.611 './librte_crypto_ipsec_mb.so' -> 'dpdk/pmds-23.0/librte_crypto_ipsec_mb.so' 00:03:59.611 './librte_crypto_ipsec_mb.so.23' -> 'dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23' 00:03:59.611 './librte_crypto_ipsec_mb.so.23.0' -> 'dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23.0' 00:03:59.611 './librte_crypto_mlx5.so' -> 'dpdk/pmds-23.0/librte_crypto_mlx5.so' 00:03:59.611 './librte_crypto_mlx5.so.23' -> 'dpdk/pmds-23.0/librte_crypto_mlx5.so.23' 00:03:59.611 './librte_crypto_mlx5.so.23.0' -> 'dpdk/pmds-23.0/librte_crypto_mlx5.so.23.0' 00:03:59.611 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:59.611 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:59.611 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:59.611 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:59.611 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:59.611 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:59.611 Installing symlink pointing to librte_crypto_ipsec_mb.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23 00:03:59.611 Installing symlink pointing to librte_crypto_ipsec_mb.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so 00:03:59.611 Installing symlink pointing to librte_crypto_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so.23 00:03:59.611 Installing symlink pointing to librte_crypto_mlx5.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so 00:03:59.611 Installing symlink pointing to librte_compress_isal.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so.23 00:03:59.611 Installing symlink pointing to librte_compress_isal.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so 00:03:59.611 Installing symlink pointing to librte_compress_mlx5.so.23.0 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so.23 00:03:59.611 Installing symlink pointing to librte_compress_mlx5.so.23 to /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so 00:03:59.611 Running 
custom install script '/bin/sh /var/jenkins/workspace/crypto-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:59.611 10:14:12 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:59.611 10:14:12 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:59.611 00:03:59.611 real 2m22.503s 00:03:59.611 user 16m45.119s 00:03:59.611 sys 3m31.343s 00:03:59.611 10:14:12 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:59.611 10:14:12 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:59.611 ************************************ 00:03:59.611 END TEST build_native_dpdk 00:03:59.611 ************************************ 00:03:59.611 10:14:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:59.611 10:14:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:59.611 10:14:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:59.611 10:14:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:59.611 10:14:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:59.611 10:14:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:59.611 10:14:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:59.611 10:14:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build --with-shared 00:03:59.871 Using /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:04:00.130 DPDK libraries: /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:04:00.130 DPDK includes: //var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:04:00.130 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:04:00.389 Using 'verbs' RDMA provider 00:04:16.723 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:31.615 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:31.615 Creating mk/config.mk...done. 00:04:31.615 Creating mk/cc.flags.mk...done. 00:04:31.615 Type 'make' to build. 00:04:31.615 10:14:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:04:31.615 10:14:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:31.615 10:14:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:31.616 10:14:43 -- common/autotest_common.sh@10 -- $ set +x 00:04:31.616 ************************************ 00:04:31.616 START TEST make 00:04:31.616 ************************************ 00:04:31.616 10:14:43 make -- common/autotest_common.sh@1125 -- $ make -j112 00:04:31.616 make[1]: Nothing to be done for 'all'. 
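The './librte_*.so' -> 'dpdk/pmds-23.0/...' mappings logged above come from the custom install step that relocates the DPDK PMD shared objects under a per-version plugin directory while keeping loader-visible links in the library root. A minimal bash sketch of that relinking, assuming the same paths as the log; this is an illustration, not the actual symlink-drivers-solibs.sh:

```bash
#!/bin/bash
# Hypothetical sketch: re-create the compatibility links reported in the log,
# assuming the PMDs already live under lib/dpdk/pmds-23.0.
libdir=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib
pmddir="$libdir/dpdk/pmds-23.0"

cd "$libdir"
for so in "$pmddir"/librte_*.so*; do
    # e.g. ./librte_bus_pci.so.23 -> dpdk/pmds-23.0/librte_bus_pci.so.23
    ln -sf "dpdk/pmds-23.0/$(basename "$so")" "./$(basename "$so")"
done
```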
00:04:46.508 CC lib/ut_mock/mock.o 00:04:46.508 CC lib/log/log_flags.o 00:04:46.508 CC lib/log/log.o 00:04:46.508 CC lib/log/log_deprecated.o 00:04:46.508 CC lib/ut/ut.o 00:04:46.508 LIB libspdk_ut_mock.a 00:04:46.508 LIB libspdk_ut.a 00:04:46.508 LIB libspdk_log.a 00:04:46.508 SO libspdk_ut_mock.so.6.0 00:04:46.508 SO libspdk_ut.so.2.0 00:04:46.508 SO libspdk_log.so.7.0 00:04:46.508 SYMLINK libspdk_ut_mock.so 00:04:46.508 SYMLINK libspdk_ut.so 00:04:46.508 SYMLINK libspdk_log.so 00:04:46.508 CXX lib/trace_parser/trace.o 00:04:46.508 CC lib/util/base64.o 00:04:46.508 CC lib/util/bit_array.o 00:04:46.508 CC lib/dma/dma.o 00:04:46.508 CC lib/util/cpuset.o 00:04:46.508 CC lib/util/crc16.o 00:04:46.508 CC lib/util/crc32_ieee.o 00:04:46.508 CC lib/util/crc32.o 00:04:46.508 CC lib/util/crc32c.o 00:04:46.508 CC lib/util/fd.o 00:04:46.508 CC lib/util/crc64.o 00:04:46.508 CC lib/util/dif.o 00:04:46.508 CC lib/ioat/ioat.o 00:04:46.508 CC lib/util/fd_group.o 00:04:46.508 CC lib/util/file.o 00:04:46.508 CC lib/util/hexlify.o 00:04:46.508 CC lib/util/iov.o 00:04:46.508 CC lib/util/math.o 00:04:46.508 CC lib/util/net.o 00:04:46.508 CC lib/util/pipe.o 00:04:46.508 CC lib/util/strerror_tls.o 00:04:46.508 CC lib/util/string.o 00:04:46.508 CC lib/util/uuid.o 00:04:46.508 CC lib/util/xor.o 00:04:46.508 CC lib/util/zipf.o 00:04:46.508 CC lib/vfio_user/host/vfio_user.o 00:04:46.508 CC lib/vfio_user/host/vfio_user_pci.o 00:04:46.508 LIB libspdk_dma.a 00:04:46.508 SO libspdk_dma.so.4.0 00:04:46.508 LIB libspdk_ioat.a 00:04:46.508 SO libspdk_ioat.so.7.0 00:04:46.508 SYMLINK libspdk_dma.so 00:04:46.508 SYMLINK libspdk_ioat.so 00:04:46.508 LIB libspdk_vfio_user.a 00:04:46.508 SO libspdk_vfio_user.so.5.0 00:04:46.766 LIB libspdk_util.a 00:04:46.766 SYMLINK libspdk_vfio_user.so 00:04:46.766 SO libspdk_util.so.10.0 00:04:47.024 SYMLINK libspdk_util.so 00:04:47.283 CC lib/env_dpdk/env.o 00:04:47.283 CC lib/env_dpdk/memory.o 00:04:47.283 CC lib/env_dpdk/pci.o 00:04:47.283 CC lib/reduce/reduce.o 00:04:47.283 CC lib/env_dpdk/init.o 00:04:47.283 CC lib/env_dpdk/pci_ioat.o 00:04:47.283 CC lib/env_dpdk/threads.o 00:04:47.283 CC lib/env_dpdk/pci_virtio.o 00:04:47.283 CC lib/env_dpdk/pci_vmd.o 00:04:47.283 CC lib/env_dpdk/pci_idxd.o 00:04:47.283 CC lib/env_dpdk/pci_event.o 00:04:47.283 CC lib/env_dpdk/sigbus_handler.o 00:04:47.283 CC lib/rdma_provider/common.o 00:04:47.283 CC lib/env_dpdk/pci_dpdk.o 00:04:47.283 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:47.283 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:47.283 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:47.283 CC lib/vmd/vmd.o 00:04:47.283 CC lib/vmd/led.o 00:04:47.283 CC lib/conf/conf.o 00:04:47.283 CC lib/json/json_parse.o 00:04:47.283 CC lib/json/json_util.o 00:04:47.283 CC lib/json/json_write.o 00:04:47.283 CC lib/idxd/idxd.o 00:04:47.283 CC lib/idxd/idxd_user.o 00:04:47.283 CC lib/idxd/idxd_kernel.o 00:04:47.283 CC lib/rdma_utils/rdma_utils.o 00:04:47.542 LIB libspdk_rdma_provider.a 00:04:47.542 LIB libspdk_conf.a 00:04:47.542 SO libspdk_rdma_provider.so.6.0 00:04:47.542 SO libspdk_conf.so.6.0 00:04:47.542 LIB libspdk_rdma_utils.a 00:04:47.542 LIB libspdk_json.a 00:04:47.542 SYMLINK libspdk_rdma_provider.so 00:04:47.542 SO libspdk_rdma_utils.so.1.0 00:04:47.542 SYMLINK libspdk_conf.so 00:04:47.800 SO libspdk_json.so.6.0 00:04:47.800 SYMLINK libspdk_rdma_utils.so 00:04:47.800 SYMLINK libspdk_json.so 00:04:47.800 LIB libspdk_idxd.a 00:04:47.800 SO libspdk_idxd.so.12.0 00:04:47.800 LIB libspdk_trace_parser.a 00:04:48.058 LIB libspdk_reduce.a 00:04:48.058 LIB libspdk_vmd.a 
00:04:48.058 SO libspdk_trace_parser.so.5.0 00:04:48.058 SO libspdk_reduce.so.6.1 00:04:48.058 SYMLINK libspdk_idxd.so 00:04:48.058 SO libspdk_vmd.so.6.0 00:04:48.058 SYMLINK libspdk_reduce.so 00:04:48.058 SYMLINK libspdk_vmd.so 00:04:48.058 SYMLINK libspdk_trace_parser.so 00:04:48.058 CC lib/jsonrpc/jsonrpc_server.o 00:04:48.058 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:48.058 CC lib/jsonrpc/jsonrpc_client.o 00:04:48.058 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:48.624 LIB libspdk_env_dpdk.a 00:04:48.624 LIB libspdk_jsonrpc.a 00:04:48.624 SO libspdk_jsonrpc.so.6.0 00:04:48.624 SO libspdk_env_dpdk.so.15.0 00:04:48.624 SYMLINK libspdk_jsonrpc.so 00:04:48.881 SYMLINK libspdk_env_dpdk.so 00:04:49.160 CC lib/rpc/rpc.o 00:04:49.417 LIB libspdk_rpc.a 00:04:49.417 SO libspdk_rpc.so.6.0 00:04:49.417 SYMLINK libspdk_rpc.so 00:04:49.674 CC lib/trace/trace_rpc.o 00:04:49.674 CC lib/trace/trace.o 00:04:49.674 CC lib/trace/trace_flags.o 00:04:49.674 CC lib/notify/notify.o 00:04:49.674 CC lib/notify/notify_rpc.o 00:04:49.674 CC lib/keyring/keyring.o 00:04:49.674 CC lib/keyring/keyring_rpc.o 00:04:49.932 LIB libspdk_notify.a 00:04:49.932 SO libspdk_notify.so.6.0 00:04:49.932 LIB libspdk_keyring.a 00:04:49.932 LIB libspdk_trace.a 00:04:50.190 SYMLINK libspdk_notify.so 00:04:50.190 SO libspdk_keyring.so.1.0 00:04:50.190 SO libspdk_trace.so.10.0 00:04:50.190 SYMLINK libspdk_keyring.so 00:04:50.190 SYMLINK libspdk_trace.so 00:04:50.448 CC lib/sock/sock.o 00:04:50.448 CC lib/sock/sock_rpc.o 00:04:50.448 CC lib/thread/thread.o 00:04:50.448 CC lib/thread/iobuf.o 00:04:51.014 LIB libspdk_sock.a 00:04:51.014 SO libspdk_sock.so.10.0 00:04:51.014 SYMLINK libspdk_sock.so 00:04:51.272 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:51.528 CC lib/nvme/nvme_ctrlr.o 00:04:51.528 CC lib/nvme/nvme_fabric.o 00:04:51.528 CC lib/nvme/nvme_ns_cmd.o 00:04:51.528 CC lib/nvme/nvme_ns.o 00:04:51.528 CC lib/nvme/nvme_pcie_common.o 00:04:51.528 CC lib/nvme/nvme_pcie.o 00:04:51.528 CC lib/nvme/nvme_qpair.o 00:04:51.528 CC lib/nvme/nvme.o 00:04:51.528 CC lib/nvme/nvme_quirks.o 00:04:51.528 CC lib/nvme/nvme_transport.o 00:04:51.528 CC lib/nvme/nvme_discovery.o 00:04:51.528 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:51.528 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:51.528 CC lib/nvme/nvme_tcp.o 00:04:51.528 CC lib/nvme/nvme_opal.o 00:04:51.528 CC lib/nvme/nvme_io_msg.o 00:04:51.528 CC lib/nvme/nvme_poll_group.o 00:04:51.528 CC lib/nvme/nvme_zns.o 00:04:51.528 CC lib/nvme/nvme_stubs.o 00:04:51.528 CC lib/nvme/nvme_auth.o 00:04:51.528 CC lib/nvme/nvme_cuse.o 00:04:51.528 CC lib/nvme/nvme_rdma.o 00:04:52.108 LIB libspdk_thread.a 00:04:52.108 SO libspdk_thread.so.10.1 00:04:52.108 SYMLINK libspdk_thread.so 00:04:52.365 CC lib/init/json_config.o 00:04:52.365 CC lib/init/subsystem.o 00:04:52.365 CC lib/init/subsystem_rpc.o 00:04:52.365 CC lib/init/rpc.o 00:04:52.365 CC lib/accel/accel.o 00:04:52.365 CC lib/accel/accel_rpc.o 00:04:52.365 CC lib/accel/accel_sw.o 00:04:52.365 CC lib/virtio/virtio.o 00:04:52.365 CC lib/virtio/virtio_vhost_user.o 00:04:52.365 CC lib/virtio/virtio_vfio_user.o 00:04:52.365 CC lib/virtio/virtio_pci.o 00:04:52.365 CC lib/blob/request.o 00:04:52.365 CC lib/blob/blobstore.o 00:04:52.365 CC lib/blob/blob_bs_dev.o 00:04:52.365 CC lib/blob/zeroes.o 00:04:52.622 LIB libspdk_init.a 00:04:52.879 SO libspdk_init.so.5.0 00:04:52.879 LIB libspdk_virtio.a 00:04:52.879 SO libspdk_virtio.so.7.0 00:04:52.879 SYMLINK libspdk_init.so 00:04:52.879 SYMLINK libspdk_virtio.so 00:04:53.137 CC lib/event/app.o 00:04:53.137 CC lib/event/reactor.o 00:04:53.137 
CC lib/event/log_rpc.o 00:04:53.137 CC lib/event/app_rpc.o 00:04:53.137 CC lib/event/scheduler_static.o 00:04:53.395 LIB libspdk_accel.a 00:04:53.395 SO libspdk_accel.so.16.0 00:04:53.395 LIB libspdk_nvme.a 00:04:53.652 SYMLINK libspdk_accel.so 00:04:53.652 SO libspdk_nvme.so.13.1 00:04:53.652 LIB libspdk_event.a 00:04:53.910 SO libspdk_event.so.14.0 00:04:53.910 CC lib/bdev/bdev.o 00:04:53.910 CC lib/bdev/bdev_rpc.o 00:04:53.910 CC lib/bdev/bdev_zone.o 00:04:53.910 CC lib/bdev/part.o 00:04:53.910 CC lib/bdev/scsi_nvme.o 00:04:53.910 SYMLINK libspdk_event.so 00:04:53.910 SYMLINK libspdk_nvme.so 00:04:55.285 LIB libspdk_blob.a 00:04:55.285 SO libspdk_blob.so.11.0 00:04:55.285 SYMLINK libspdk_blob.so 00:04:55.850 CC lib/blobfs/blobfs.o 00:04:55.850 CC lib/blobfs/tree.o 00:04:55.850 CC lib/lvol/lvol.o 00:04:56.415 LIB libspdk_bdev.a 00:04:56.415 SO libspdk_bdev.so.16.0 00:04:56.415 LIB libspdk_blobfs.a 00:04:56.415 SYMLINK libspdk_bdev.so 00:04:56.672 SO libspdk_blobfs.so.10.0 00:04:56.672 LIB libspdk_lvol.a 00:04:56.672 SYMLINK libspdk_blobfs.so 00:04:56.672 SO libspdk_lvol.so.10.0 00:04:56.672 SYMLINK libspdk_lvol.so 00:04:56.932 CC lib/scsi/port.o 00:04:56.932 CC lib/scsi/dev.o 00:04:56.932 CC lib/ublk/ublk.o 00:04:56.932 CC lib/scsi/lun.o 00:04:56.932 CC lib/ublk/ublk_rpc.o 00:04:56.932 CC lib/nvmf/ctrlr.o 00:04:56.932 CC lib/scsi/scsi.o 00:04:56.932 CC lib/scsi/scsi_bdev.o 00:04:56.932 CC lib/nbd/nbd.o 00:04:56.932 CC lib/scsi/scsi_pr.o 00:04:56.932 CC lib/nvmf/ctrlr_discovery.o 00:04:56.932 CC lib/nbd/nbd_rpc.o 00:04:56.932 CC lib/scsi/scsi_rpc.o 00:04:56.932 CC lib/ftl/ftl_core.o 00:04:56.932 CC lib/nvmf/ctrlr_bdev.o 00:04:56.932 CC lib/scsi/task.o 00:04:56.932 CC lib/ftl/ftl_init.o 00:04:56.932 CC lib/nvmf/subsystem.o 00:04:56.932 CC lib/ftl/ftl_layout.o 00:04:56.932 CC lib/nvmf/nvmf.o 00:04:56.932 CC lib/ftl/ftl_debug.o 00:04:56.933 CC lib/nvmf/nvmf_rpc.o 00:04:56.933 CC lib/ftl/ftl_io.o 00:04:56.933 CC lib/nvmf/transport.o 00:04:56.933 CC lib/ftl/ftl_sb.o 00:04:56.933 CC lib/nvmf/tcp.o 00:04:56.933 CC lib/ftl/ftl_l2p.o 00:04:56.933 CC lib/nvmf/stubs.o 00:04:56.933 CC lib/ftl/ftl_l2p_flat.o 00:04:56.933 CC lib/nvmf/mdns_server.o 00:04:56.933 CC lib/ftl/ftl_nv_cache.o 00:04:56.933 CC lib/ftl/ftl_band.o 00:04:56.933 CC lib/nvmf/rdma.o 00:04:56.933 CC lib/ftl/ftl_rq.o 00:04:56.933 CC lib/ftl/ftl_band_ops.o 00:04:56.933 CC lib/ftl/ftl_writer.o 00:04:56.933 CC lib/nvmf/auth.o 00:04:56.933 CC lib/ftl/ftl_reloc.o 00:04:56.933 CC lib/ftl/ftl_l2p_cache.o 00:04:56.933 CC lib/ftl/ftl_p2l.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:56.933 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:56.933 CC lib/ftl/utils/ftl_conf.o 00:04:56.933 CC lib/ftl/utils/ftl_md.o 00:04:56.933 CC lib/ftl/utils/ftl_mempool.o 00:04:56.933 CC lib/ftl/utils/ftl_property.o 00:04:56.933 CC lib/ftl/utils/ftl_bitmap.o 00:04:56.933 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:56.933 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:56.933 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:56.933 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:04:56.933 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:56.933 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:56.933 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:56.933 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:56.933 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:56.933 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:56.933 CC lib/ftl/base/ftl_base_dev.o 00:04:56.933 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:56.933 CC lib/ftl/base/ftl_base_bdev.o 00:04:56.933 CC lib/ftl/ftl_trace.o 00:04:57.499 LIB libspdk_scsi.a 00:04:57.499 LIB libspdk_nbd.a 00:04:57.757 SO libspdk_nbd.so.7.0 00:04:57.757 SO libspdk_scsi.so.9.0 00:04:57.757 SYMLINK libspdk_nbd.so 00:04:57.757 SYMLINK libspdk_scsi.so 00:04:57.757 LIB libspdk_ublk.a 00:04:58.015 SO libspdk_ublk.so.3.0 00:04:58.015 SYMLINK libspdk_ublk.so 00:04:58.015 CC lib/iscsi/conn.o 00:04:58.015 CC lib/iscsi/init_grp.o 00:04:58.015 CC lib/vhost/vhost.o 00:04:58.015 CC lib/iscsi/iscsi.o 00:04:58.015 CC lib/vhost/vhost_rpc.o 00:04:58.015 CC lib/iscsi/md5.o 00:04:58.015 CC lib/vhost/vhost_scsi.o 00:04:58.015 CC lib/iscsi/param.o 00:04:58.015 CC lib/vhost/vhost_blk.o 00:04:58.015 CC lib/iscsi/portal_grp.o 00:04:58.015 CC lib/vhost/rte_vhost_user.o 00:04:58.015 CC lib/iscsi/tgt_node.o 00:04:58.015 CC lib/iscsi/iscsi_subsystem.o 00:04:58.015 CC lib/iscsi/task.o 00:04:58.015 CC lib/iscsi/iscsi_rpc.o 00:04:58.274 LIB libspdk_ftl.a 00:04:58.274 SO libspdk_ftl.so.9.0 00:04:58.840 SYMLINK libspdk_ftl.so 00:04:59.098 LIB libspdk_nvmf.a 00:04:59.098 SO libspdk_nvmf.so.19.0 00:04:59.098 LIB libspdk_vhost.a 00:04:59.098 SO libspdk_vhost.so.8.0 00:04:59.356 SYMLINK libspdk_vhost.so 00:04:59.356 SYMLINK libspdk_nvmf.so 00:04:59.356 LIB libspdk_iscsi.a 00:04:59.614 SO libspdk_iscsi.so.8.0 00:04:59.614 SYMLINK libspdk_iscsi.so 00:05:00.181 CC module/env_dpdk/env_dpdk_rpc.o 00:05:00.439 CC module/keyring/file/keyring.o 00:05:00.439 CC module/keyring/file/keyring_rpc.o 00:05:00.439 CC module/accel/ioat/accel_ioat.o 00:05:00.439 CC module/accel/ioat/accel_ioat_rpc.o 00:05:00.439 CC module/accel/error/accel_error.o 00:05:00.439 CC module/accel/error/accel_error_rpc.o 00:05:00.439 LIB libspdk_env_dpdk_rpc.a 00:05:00.439 CC module/scheduler/gscheduler/gscheduler.o 00:05:00.439 CC module/sock/posix/posix.o 00:05:00.439 CC module/accel/dsa/accel_dsa.o 00:05:00.439 CC module/accel/dsa/accel_dsa_rpc.o 00:05:00.439 CC module/keyring/linux/keyring.o 00:05:00.439 CC module/keyring/linux/keyring_rpc.o 00:05:00.439 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:05:00.439 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:05:00.439 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:00.439 CC module/blob/bdev/blob_bdev.o 00:05:00.439 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:00.439 CC module/accel/iaa/accel_iaa.o 00:05:00.439 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:05:00.439 CC module/accel/iaa/accel_iaa_rpc.o 00:05:00.439 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:05:00.439 SO libspdk_env_dpdk_rpc.so.6.0 00:05:00.439 SYMLINK libspdk_env_dpdk_rpc.so 00:05:00.698 LIB libspdk_keyring_file.a 00:05:00.698 LIB libspdk_scheduler_gscheduler.a 00:05:00.698 LIB libspdk_keyring_linux.a 00:05:00.698 SO libspdk_keyring_file.so.1.0 00:05:00.698 LIB libspdk_accel_error.a 00:05:00.698 LIB libspdk_accel_ioat.a 00:05:00.698 LIB libspdk_scheduler_dpdk_governor.a 00:05:00.698 SO libspdk_keyring_linux.so.1.0 00:05:00.698 SO libspdk_scheduler_gscheduler.so.4.0 00:05:00.698 SO libspdk_accel_error.so.2.0 00:05:00.698 SO libspdk_accel_ioat.so.6.0 00:05:00.698 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:05:00.698 LIB libspdk_accel_iaa.a 00:05:00.698 LIB libspdk_scheduler_dynamic.a 00:05:00.698 SYMLINK libspdk_keyring_file.so 00:05:00.698 LIB libspdk_accel_dsa.a 00:05:00.698 SYMLINK libspdk_keyring_linux.so 00:05:00.698 SYMLINK libspdk_scheduler_gscheduler.so 00:05:00.698 SO libspdk_scheduler_dynamic.so.4.0 00:05:00.698 SO libspdk_accel_iaa.so.3.0 00:05:00.698 SYMLINK libspdk_accel_error.so 00:05:00.698 LIB libspdk_blob_bdev.a 00:05:00.698 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:00.698 SO libspdk_accel_dsa.so.5.0 00:05:00.698 SYMLINK libspdk_accel_ioat.so 00:05:00.698 SO libspdk_blob_bdev.so.11.0 00:05:00.698 SYMLINK libspdk_scheduler_dynamic.so 00:05:00.698 SYMLINK libspdk_accel_iaa.so 00:05:00.955 SYMLINK libspdk_accel_dsa.so 00:05:00.955 SYMLINK libspdk_blob_bdev.so 00:05:01.213 LIB libspdk_sock_posix.a 00:05:01.213 SO libspdk_sock_posix.so.6.0 00:05:01.213 SYMLINK libspdk_sock_posix.so 00:05:01.472 CC module/bdev/lvol/vbdev_lvol.o 00:05:01.472 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:01.472 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:01.472 CC module/bdev/delay/vbdev_delay.o 00:05:01.472 CC module/bdev/gpt/gpt.o 00:05:01.472 CC module/bdev/error/vbdev_error.o 00:05:01.472 CC module/bdev/gpt/vbdev_gpt.o 00:05:01.472 CC module/bdev/error/vbdev_error_rpc.o 00:05:01.472 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:01.472 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:01.472 CC module/bdev/aio/bdev_aio.o 00:05:01.472 CC module/bdev/aio/bdev_aio_rpc.o 00:05:01.472 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:01.472 CC module/bdev/iscsi/bdev_iscsi.o 00:05:01.472 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:01.472 CC module/bdev/raid/bdev_raid.o 00:05:01.472 CC module/blobfs/bdev/blobfs_bdev.o 00:05:01.472 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:01.472 CC module/bdev/raid/bdev_raid_rpc.o 00:05:01.472 CC module/bdev/null/bdev_null.o 00:05:01.472 CC module/bdev/raid/bdev_raid_sb.o 00:05:01.472 CC module/bdev/null/bdev_null_rpc.o 00:05:01.472 CC module/bdev/split/vbdev_split.o 00:05:01.472 CC module/bdev/raid/raid0.o 00:05:01.472 CC module/bdev/ftl/bdev_ftl.o 00:05:01.472 CC module/bdev/split/vbdev_split_rpc.o 00:05:01.472 CC module/bdev/raid/raid1.o 00:05:01.472 CC module/bdev/raid/concat.o 00:05:01.472 CC module/bdev/malloc/bdev_malloc.o 00:05:01.472 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:01.472 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:01.472 CC module/bdev/compress/vbdev_compress_rpc.o 00:05:01.472 CC module/bdev/compress/vbdev_compress.o 00:05:01.472 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:01.472 CC module/bdev/nvme/bdev_nvme.o 00:05:01.472 CC module/bdev/crypto/vbdev_crypto.o 00:05:01.472 CC module/bdev/nvme/nvme_rpc.o 00:05:01.472 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:05:01.472 CC module/bdev/passthru/vbdev_passthru.o 00:05:01.472 CC module/bdev/nvme/vbdev_opal.o 00:05:01.472 CC module/bdev/nvme/bdev_mdns_client.o 00:05:01.472 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:01.472 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:01.472 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:01.472 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:01.472 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:01.472 LIB libspdk_accel_dpdk_compressdev.a 00:05:01.472 SO libspdk_accel_dpdk_compressdev.so.3.0 00:05:01.731 SYMLINK libspdk_accel_dpdk_compressdev.so 00:05:01.731 LIB libspdk_blobfs_bdev.a 00:05:01.731 SO libspdk_blobfs_bdev.so.6.0 00:05:01.731 LIB libspdk_accel_dpdk_cryptodev.a 00:05:01.731 LIB libspdk_bdev_gpt.a 
00:05:01.731 LIB libspdk_bdev_null.a 00:05:01.731 LIB libspdk_bdev_split.a 00:05:01.731 LIB libspdk_bdev_error.a 00:05:01.731 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:05:01.731 SO libspdk_bdev_gpt.so.6.0 00:05:01.731 SO libspdk_bdev_null.so.6.0 00:05:01.731 LIB libspdk_bdev_ftl.a 00:05:01.731 SYMLINK libspdk_blobfs_bdev.so 00:05:01.731 SO libspdk_bdev_split.so.6.0 00:05:01.731 LIB libspdk_bdev_aio.a 00:05:01.731 SO libspdk_bdev_error.so.6.0 00:05:01.731 LIB libspdk_bdev_passthru.a 00:05:01.989 SO libspdk_bdev_ftl.so.6.0 00:05:01.989 SO libspdk_bdev_aio.so.6.0 00:05:01.989 LIB libspdk_bdev_iscsi.a 00:05:01.989 LIB libspdk_bdev_zone_block.a 00:05:01.989 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:05:01.989 SO libspdk_bdev_passthru.so.6.0 00:05:01.989 SYMLINK libspdk_bdev_gpt.so 00:05:01.989 SYMLINK libspdk_bdev_null.so 00:05:01.989 SYMLINK libspdk_bdev_split.so 00:05:01.989 LIB libspdk_bdev_delay.a 00:05:01.989 SYMLINK libspdk_bdev_error.so 00:05:01.989 LIB libspdk_bdev_malloc.a 00:05:01.989 SO libspdk_bdev_iscsi.so.6.0 00:05:01.989 LIB libspdk_bdev_compress.a 00:05:01.989 SO libspdk_bdev_zone_block.so.6.0 00:05:01.989 SYMLINK libspdk_bdev_ftl.so 00:05:01.989 SYMLINK libspdk_bdev_aio.so 00:05:01.989 SO libspdk_bdev_delay.so.6.0 00:05:01.989 SO libspdk_bdev_malloc.so.6.0 00:05:01.989 SO libspdk_bdev_compress.so.6.0 00:05:01.989 SYMLINK libspdk_bdev_passthru.so 00:05:01.989 SYMLINK libspdk_bdev_iscsi.so 00:05:01.989 SYMLINK libspdk_bdev_zone_block.so 00:05:01.989 SYMLINK libspdk_bdev_delay.so 00:05:01.989 LIB libspdk_bdev_lvol.a 00:05:01.989 LIB libspdk_bdev_virtio.a 00:05:01.989 SYMLINK libspdk_bdev_malloc.so 00:05:01.989 SYMLINK libspdk_bdev_compress.so 00:05:01.989 SO libspdk_bdev_lvol.so.6.0 00:05:01.989 SO libspdk_bdev_virtio.so.6.0 00:05:02.248 SYMLINK libspdk_bdev_lvol.so 00:05:02.248 SYMLINK libspdk_bdev_virtio.so 00:05:02.248 LIB libspdk_bdev_crypto.a 00:05:02.248 SO libspdk_bdev_crypto.so.6.0 00:05:02.248 SYMLINK libspdk_bdev_crypto.so 00:05:02.507 LIB libspdk_bdev_raid.a 00:05:02.507 SO libspdk_bdev_raid.so.6.0 00:05:02.766 SYMLINK libspdk_bdev_raid.so 00:05:03.703 LIB libspdk_bdev_nvme.a 00:05:03.703 SO libspdk_bdev_nvme.so.7.0 00:05:03.703 SYMLINK libspdk_bdev_nvme.so 00:05:04.675 CC module/event/subsystems/vmd/vmd.o 00:05:04.675 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:04.675 CC module/event/subsystems/keyring/keyring.o 00:05:04.675 CC module/event/subsystems/sock/sock.o 00:05:04.675 CC module/event/subsystems/iobuf/iobuf.o 00:05:04.675 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:04.675 CC module/event/subsystems/scheduler/scheduler.o 00:05:04.675 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:04.675 LIB libspdk_event_vmd.a 00:05:04.675 LIB libspdk_event_scheduler.a 00:05:04.675 LIB libspdk_event_keyring.a 00:05:04.675 LIB libspdk_event_iobuf.a 00:05:04.675 LIB libspdk_event_vhost_blk.a 00:05:04.675 LIB libspdk_event_sock.a 00:05:04.675 SO libspdk_event_vmd.so.6.0 00:05:04.675 SO libspdk_event_scheduler.so.4.0 00:05:04.942 SO libspdk_event_keyring.so.1.0 00:05:04.942 SO libspdk_event_vhost_blk.so.3.0 00:05:04.942 SO libspdk_event_sock.so.5.0 00:05:04.942 SO libspdk_event_iobuf.so.3.0 00:05:04.942 SYMLINK libspdk_event_vmd.so 00:05:04.942 SYMLINK libspdk_event_scheduler.so 00:05:04.942 SYMLINK libspdk_event_keyring.so 00:05:04.942 SYMLINK libspdk_event_vhost_blk.so 00:05:04.943 SYMLINK libspdk_event_sock.so 00:05:04.943 SYMLINK libspdk_event_iobuf.so 00:05:05.201 CC module/event/subsystems/accel/accel.o 00:05:05.460 LIB libspdk_event_accel.a 00:05:05.460 SO 
libspdk_event_accel.so.6.0 00:05:05.460 SYMLINK libspdk_event_accel.so 00:05:05.718 CC module/event/subsystems/bdev/bdev.o 00:05:05.977 LIB libspdk_event_bdev.a 00:05:05.977 SO libspdk_event_bdev.so.6.0 00:05:06.235 SYMLINK libspdk_event_bdev.so 00:05:06.493 CC module/event/subsystems/nbd/nbd.o 00:05:06.493 CC module/event/subsystems/scsi/scsi.o 00:05:06.493 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:06.493 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:06.493 CC module/event/subsystems/ublk/ublk.o 00:05:06.752 LIB libspdk_event_nbd.a 00:05:06.752 LIB libspdk_event_scsi.a 00:05:06.752 LIB libspdk_event_ublk.a 00:05:06.752 SO libspdk_event_nbd.so.6.0 00:05:06.752 SO libspdk_event_scsi.so.6.0 00:05:06.752 SO libspdk_event_ublk.so.3.0 00:05:06.752 LIB libspdk_event_nvmf.a 00:05:06.752 SYMLINK libspdk_event_nbd.so 00:05:06.752 SYMLINK libspdk_event_scsi.so 00:05:06.752 SYMLINK libspdk_event_ublk.so 00:05:06.752 SO libspdk_event_nvmf.so.6.0 00:05:07.010 SYMLINK libspdk_event_nvmf.so 00:05:07.269 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:07.269 CC module/event/subsystems/iscsi/iscsi.o 00:05:07.269 LIB libspdk_event_vhost_scsi.a 00:05:07.269 LIB libspdk_event_iscsi.a 00:05:07.527 SO libspdk_event_vhost_scsi.so.3.0 00:05:07.527 SO libspdk_event_iscsi.so.6.0 00:05:07.527 SYMLINK libspdk_event_vhost_scsi.so 00:05:07.527 SYMLINK libspdk_event_iscsi.so 00:05:07.785 SO libspdk.so.6.0 00:05:07.785 SYMLINK libspdk.so 00:05:08.043 CC test/rpc_client/rpc_client_test.o 00:05:08.043 CC app/spdk_lspci/spdk_lspci.o 00:05:08.043 TEST_HEADER include/spdk/accel_module.h 00:05:08.043 TEST_HEADER include/spdk/accel.h 00:05:08.043 TEST_HEADER include/spdk/barrier.h 00:05:08.043 TEST_HEADER include/spdk/assert.h 00:05:08.043 TEST_HEADER include/spdk/bdev_module.h 00:05:08.043 TEST_HEADER include/spdk/base64.h 00:05:08.043 TEST_HEADER include/spdk/bdev_zone.h 00:05:08.043 TEST_HEADER include/spdk/bdev.h 00:05:08.043 TEST_HEADER include/spdk/bit_array.h 00:05:08.043 TEST_HEADER include/spdk/bit_pool.h 00:05:08.043 CC app/spdk_top/spdk_top.o 00:05:08.043 TEST_HEADER include/spdk/blob_bdev.h 00:05:08.043 TEST_HEADER include/spdk/blobfs.h 00:05:08.043 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:08.043 TEST_HEADER include/spdk/blob.h 00:05:08.043 CC app/spdk_nvme_discover/discovery_aer.o 00:05:08.043 TEST_HEADER include/spdk/conf.h 00:05:08.043 TEST_HEADER include/spdk/config.h 00:05:08.043 TEST_HEADER include/spdk/cpuset.h 00:05:08.043 CC app/spdk_nvme_identify/identify.o 00:05:08.043 TEST_HEADER include/spdk/crc16.h 00:05:08.043 TEST_HEADER include/spdk/crc32.h 00:05:08.043 TEST_HEADER include/spdk/dif.h 00:05:08.043 CXX app/trace/trace.o 00:05:08.043 CC app/trace_record/trace_record.o 00:05:08.043 TEST_HEADER include/spdk/crc64.h 00:05:08.043 TEST_HEADER include/spdk/dma.h 00:05:08.043 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:08.043 TEST_HEADER include/spdk/endian.h 00:05:08.043 TEST_HEADER include/spdk/env.h 00:05:08.043 TEST_HEADER include/spdk/env_dpdk.h 00:05:08.043 TEST_HEADER include/spdk/event.h 00:05:08.043 TEST_HEADER include/spdk/fd_group.h 00:05:08.043 TEST_HEADER include/spdk/fd.h 00:05:08.043 TEST_HEADER include/spdk/gpt_spec.h 00:05:08.043 TEST_HEADER include/spdk/ftl.h 00:05:08.043 TEST_HEADER include/spdk/file.h 00:05:08.043 TEST_HEADER include/spdk/hexlify.h 00:05:08.043 TEST_HEADER include/spdk/histogram_data.h 00:05:08.043 CC app/spdk_nvme_perf/perf.o 00:05:08.043 TEST_HEADER include/spdk/idxd_spec.h 00:05:08.043 TEST_HEADER include/spdk/idxd.h 00:05:08.043 TEST_HEADER 
include/spdk/ioat.h 00:05:08.043 TEST_HEADER include/spdk/init.h 00:05:08.043 TEST_HEADER include/spdk/ioat_spec.h 00:05:08.043 TEST_HEADER include/spdk/iscsi_spec.h 00:05:08.043 TEST_HEADER include/spdk/json.h 00:05:08.043 TEST_HEADER include/spdk/jsonrpc.h 00:05:08.043 TEST_HEADER include/spdk/keyring.h 00:05:08.043 TEST_HEADER include/spdk/likely.h 00:05:08.043 TEST_HEADER include/spdk/keyring_module.h 00:05:08.043 TEST_HEADER include/spdk/lvol.h 00:05:08.043 TEST_HEADER include/spdk/log.h 00:05:08.043 CC app/spdk_dd/spdk_dd.o 00:05:08.043 TEST_HEADER include/spdk/nbd.h 00:05:08.043 TEST_HEADER include/spdk/memory.h 00:05:08.043 TEST_HEADER include/spdk/mmio.h 00:05:08.043 TEST_HEADER include/spdk/net.h 00:05:08.043 CC app/nvmf_tgt/nvmf_main.o 00:05:08.043 TEST_HEADER include/spdk/notify.h 00:05:08.043 TEST_HEADER include/spdk/nvme_intel.h 00:05:08.043 TEST_HEADER include/spdk/nvme.h 00:05:08.043 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:08.043 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:08.043 TEST_HEADER include/spdk/nvme_spec.h 00:05:08.043 TEST_HEADER include/spdk/nvme_zns.h 00:05:08.043 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:08.043 TEST_HEADER include/spdk/nvmf.h 00:05:08.043 TEST_HEADER include/spdk/nvmf_spec.h 00:05:08.043 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:08.043 TEST_HEADER include/spdk/nvmf_transport.h 00:05:08.310 TEST_HEADER include/spdk/opal.h 00:05:08.310 TEST_HEADER include/spdk/pci_ids.h 00:05:08.310 TEST_HEADER include/spdk/opal_spec.h 00:05:08.310 TEST_HEADER include/spdk/pipe.h 00:05:08.310 TEST_HEADER include/spdk/reduce.h 00:05:08.310 TEST_HEADER include/spdk/queue.h 00:05:08.310 TEST_HEADER include/spdk/rpc.h 00:05:08.310 TEST_HEADER include/spdk/scheduler.h 00:05:08.310 TEST_HEADER include/spdk/scsi.h 00:05:08.310 TEST_HEADER include/spdk/scsi_spec.h 00:05:08.310 TEST_HEADER include/spdk/sock.h 00:05:08.310 TEST_HEADER include/spdk/stdinc.h 00:05:08.310 TEST_HEADER include/spdk/string.h 00:05:08.310 CC app/iscsi_tgt/iscsi_tgt.o 00:05:08.310 TEST_HEADER include/spdk/thread.h 00:05:08.310 TEST_HEADER include/spdk/trace.h 00:05:08.310 TEST_HEADER include/spdk/trace_parser.h 00:05:08.310 TEST_HEADER include/spdk/tree.h 00:05:08.310 TEST_HEADER include/spdk/ublk.h 00:05:08.310 TEST_HEADER include/spdk/util.h 00:05:08.310 TEST_HEADER include/spdk/uuid.h 00:05:08.310 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:08.310 TEST_HEADER include/spdk/version.h 00:05:08.310 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:08.310 TEST_HEADER include/spdk/vmd.h 00:05:08.310 TEST_HEADER include/spdk/vhost.h 00:05:08.310 TEST_HEADER include/spdk/xor.h 00:05:08.310 TEST_HEADER include/spdk/zipf.h 00:05:08.310 CXX test/cpp_headers/accel.o 00:05:08.310 CC app/spdk_tgt/spdk_tgt.o 00:05:08.310 CXX test/cpp_headers/accel_module.o 00:05:08.310 CXX test/cpp_headers/assert.o 00:05:08.310 CXX test/cpp_headers/barrier.o 00:05:08.310 CXX test/cpp_headers/bdev.o 00:05:08.310 CXX test/cpp_headers/base64.o 00:05:08.310 CXX test/cpp_headers/bdev_zone.o 00:05:08.310 CXX test/cpp_headers/bdev_module.o 00:05:08.310 CXX test/cpp_headers/bit_array.o 00:05:08.310 CXX test/cpp_headers/blob_bdev.o 00:05:08.310 CXX test/cpp_headers/blobfs_bdev.o 00:05:08.310 CXX test/cpp_headers/bit_pool.o 00:05:08.310 CXX test/cpp_headers/blobfs.o 00:05:08.310 CXX test/cpp_headers/conf.o 00:05:08.310 CXX test/cpp_headers/blob.o 00:05:08.310 CXX test/cpp_headers/config.o 00:05:08.310 CXX test/cpp_headers/cpuset.o 00:05:08.310 CXX test/cpp_headers/crc16.o 00:05:08.310 CXX test/cpp_headers/crc32.o 
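The long run of CXX test/cpp_headers/*.o compiles above builds one small C++ translation unit per public SPDK header; the apparent intent is to check that each header compiles stand-alone under a C++ compiler. A hedged bash sketch of that kind of check, with the include path assumed from the workspace layout shown in the log (this is not the project's actual test harness):

```bash
#!/bin/bash
# Hypothetical sketch of a header self-containment check: compile one tiny
# C++ TU per public header and report any header that fails on its own.
spdk_root=/var/jenkins/workspace/crypto-phy-autotest/spdk   # assumed from the log

for hdr in "$spdk_root"/include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "/tmp/${name}.cpp"
    g++ -std=c++11 -I"$spdk_root/include" -c "/tmp/${name}.cpp" -o "/tmp/${name}.o" \
        || echo "header not self-contained: $hdr"
done
```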
00:05:08.310 CXX test/cpp_headers/dif.o 00:05:08.310 CXX test/cpp_headers/crc64.o 00:05:08.310 CXX test/cpp_headers/endian.o 00:05:08.310 CXX test/cpp_headers/env_dpdk.o 00:05:08.310 CXX test/cpp_headers/dma.o 00:05:08.310 CXX test/cpp_headers/event.o 00:05:08.310 CXX test/cpp_headers/env.o 00:05:08.310 CXX test/cpp_headers/fd_group.o 00:05:08.310 CXX test/cpp_headers/fd.o 00:05:08.310 CXX test/cpp_headers/file.o 00:05:08.310 CXX test/cpp_headers/ftl.o 00:05:08.310 CXX test/cpp_headers/gpt_spec.o 00:05:08.310 CXX test/cpp_headers/hexlify.o 00:05:08.310 CXX test/cpp_headers/idxd.o 00:05:08.310 CXX test/cpp_headers/histogram_data.o 00:05:08.310 CXX test/cpp_headers/idxd_spec.o 00:05:08.310 CXX test/cpp_headers/init.o 00:05:08.310 CXX test/cpp_headers/ioat.o 00:05:08.310 CXX test/cpp_headers/ioat_spec.o 00:05:08.310 CXX test/cpp_headers/iscsi_spec.o 00:05:08.310 CXX test/cpp_headers/json.o 00:05:08.310 CXX test/cpp_headers/jsonrpc.o 00:05:08.310 CXX test/cpp_headers/keyring.o 00:05:08.310 CXX test/cpp_headers/likely.o 00:05:08.310 CXX test/cpp_headers/keyring_module.o 00:05:08.310 CXX test/cpp_headers/log.o 00:05:08.310 CXX test/cpp_headers/lvol.o 00:05:08.310 CXX test/cpp_headers/memory.o 00:05:08.310 CXX test/cpp_headers/mmio.o 00:05:08.310 CXX test/cpp_headers/net.o 00:05:08.310 CXX test/cpp_headers/nbd.o 00:05:08.310 CXX test/cpp_headers/notify.o 00:05:08.310 CXX test/cpp_headers/nvme_ocssd.o 00:05:08.310 CXX test/cpp_headers/nvme_intel.o 00:05:08.310 CXX test/cpp_headers/nvme.o 00:05:08.310 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:08.310 CXX test/cpp_headers/nvme_spec.o 00:05:08.310 CXX test/cpp_headers/nvme_zns.o 00:05:08.310 CXX test/cpp_headers/nvmf_cmd.o 00:05:08.310 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:08.310 CXX test/cpp_headers/nvmf.o 00:05:08.310 CXX test/cpp_headers/nvmf_spec.o 00:05:08.310 CXX test/cpp_headers/nvmf_transport.o 00:05:08.310 CXX test/cpp_headers/opal.o 00:05:08.310 CXX test/cpp_headers/opal_spec.o 00:05:08.310 CXX test/cpp_headers/pci_ids.o 00:05:08.310 CXX test/cpp_headers/pipe.o 00:05:08.310 CXX test/cpp_headers/queue.o 00:05:08.310 CXX test/cpp_headers/reduce.o 00:05:08.310 CXX test/cpp_headers/rpc.o 00:05:08.310 CXX test/cpp_headers/scheduler.o 00:05:08.310 CC examples/util/zipf/zipf.o 00:05:08.310 CXX test/cpp_headers/scsi.o 00:05:08.310 CXX test/cpp_headers/sock.o 00:05:08.310 CXX test/cpp_headers/scsi_spec.o 00:05:08.310 CXX test/cpp_headers/stdinc.o 00:05:08.310 CXX test/cpp_headers/string.o 00:05:08.310 CXX test/cpp_headers/thread.o 00:05:08.310 CXX test/cpp_headers/trace.o 00:05:08.310 CXX test/cpp_headers/trace_parser.o 00:05:08.310 CXX test/cpp_headers/tree.o 00:05:08.310 CXX test/cpp_headers/ublk.o 00:05:08.310 CXX test/cpp_headers/util.o 00:05:08.310 CXX test/cpp_headers/uuid.o 00:05:08.310 CXX test/cpp_headers/version.o 00:05:08.310 CXX test/cpp_headers/vfio_user_pci.o 00:05:08.310 CC examples/ioat/verify/verify.o 00:05:08.310 CC test/thread/poller_perf/poller_perf.o 00:05:08.310 CC test/app/stub/stub.o 00:05:08.594 CC examples/ioat/perf/perf.o 00:05:08.594 CC test/app/jsoncat/jsoncat.o 00:05:08.594 CC test/app/histogram_perf/histogram_perf.o 00:05:08.594 CC test/env/pci/pci_ut.o 00:05:08.594 CXX test/cpp_headers/vfio_user_spec.o 00:05:08.594 CC test/env/memory/memory_ut.o 00:05:08.594 CC test/env/vtophys/vtophys.o 00:05:08.594 CXX test/cpp_headers/vhost.o 00:05:08.594 CC app/fio/nvme/fio_plugin.o 00:05:08.594 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:08.594 CXX test/cpp_headers/vmd.o 00:05:08.594 CC 
test/dma/test_dma/test_dma.o 00:05:08.594 CC test/app/bdev_svc/bdev_svc.o 00:05:08.594 LINK spdk_lspci 00:05:08.594 CC app/fio/bdev/fio_plugin.o 00:05:08.888 LINK rpc_client_test 00:05:08.888 LINK interrupt_tgt 00:05:08.888 LINK spdk_nvme_discover 00:05:08.888 LINK nvmf_tgt 00:05:09.151 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:09.151 CC test/env/mem_callbacks/mem_callbacks.o 00:05:09.151 LINK zipf 00:05:09.151 CXX test/cpp_headers/xor.o 00:05:09.151 LINK jsoncat 00:05:09.151 CXX test/cpp_headers/zipf.o 00:05:09.151 LINK spdk_trace_record 00:05:09.151 LINK histogram_perf 00:05:09.151 LINK poller_perf 00:05:09.151 LINK vtophys 00:05:09.151 LINK spdk_tgt 00:05:09.151 LINK iscsi_tgt 00:05:09.151 LINK stub 00:05:09.151 LINK env_dpdk_post_init 00:05:09.151 LINK verify 00:05:09.151 LINK bdev_svc 00:05:09.151 LINK ioat_perf 00:05:09.408 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:09.408 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:09.408 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:09.408 LINK spdk_dd 00:05:09.408 LINK spdk_trace 00:05:09.408 LINK pci_ut 00:05:09.408 LINK mem_callbacks 00:05:09.665 LINK nvme_fuzz 00:05:09.665 LINK test_dma 00:05:09.665 LINK spdk_bdev 00:05:09.665 LINK spdk_nvme 00:05:09.665 LINK spdk_nvme_perf 00:05:09.665 LINK spdk_nvme_identify 00:05:09.665 LINK vhost_fuzz 00:05:09.665 CC test/event/reactor/reactor.o 00:05:09.665 CC test/event/reactor_perf/reactor_perf.o 00:05:09.665 CC test/event/event_perf/event_perf.o 00:05:09.665 LINK memory_ut 00:05:09.665 LINK spdk_top 00:05:09.665 CC app/vhost/vhost.o 00:05:09.924 CC examples/idxd/perf/perf.o 00:05:09.924 CC examples/vmd/lsvmd/lsvmd.o 00:05:09.924 CC examples/vmd/led/led.o 00:05:09.924 CC test/event/scheduler/scheduler.o 00:05:09.924 CC examples/sock/hello_world/hello_sock.o 00:05:09.924 CC test/event/app_repeat/app_repeat.o 00:05:09.924 CC examples/thread/thread/thread_ex.o 00:05:09.924 LINK reactor 00:05:09.924 LINK reactor_perf 00:05:09.924 LINK event_perf 00:05:09.924 LINK lsvmd 00:05:09.924 LINK led 00:05:09.924 LINK app_repeat 00:05:09.924 LINK vhost 00:05:10.182 LINK scheduler 00:05:10.182 LINK hello_sock 00:05:10.182 LINK thread 00:05:10.182 LINK idxd_perf 00:05:10.182 CC test/nvme/aer/aer.o 00:05:10.182 CC test/nvme/err_injection/err_injection.o 00:05:10.182 CC test/nvme/cuse/cuse.o 00:05:10.182 CC test/nvme/overhead/overhead.o 00:05:10.182 CC test/nvme/sgl/sgl.o 00:05:10.182 CC test/nvme/reset/reset.o 00:05:10.182 CC test/nvme/startup/startup.o 00:05:10.182 CC test/nvme/e2edp/nvme_dp.o 00:05:10.182 CC test/nvme/connect_stress/connect_stress.o 00:05:10.182 CC test/nvme/reserve/reserve.o 00:05:10.182 CC test/nvme/fused_ordering/fused_ordering.o 00:05:10.182 CC test/nvme/boot_partition/boot_partition.o 00:05:10.182 CC test/nvme/compliance/nvme_compliance.o 00:05:10.182 CC test/nvme/simple_copy/simple_copy.o 00:05:10.182 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:10.182 CC test/nvme/fdp/fdp.o 00:05:10.182 CC test/accel/dif/dif.o 00:05:10.182 CC test/blobfs/mkfs/mkfs.o 00:05:10.440 CC test/lvol/esnap/esnap.o 00:05:10.440 LINK err_injection 00:05:10.440 LINK boot_partition 00:05:10.440 LINK startup 00:05:10.440 LINK reserve 00:05:10.440 LINK connect_stress 00:05:10.440 LINK doorbell_aers 00:05:10.440 LINK fused_ordering 00:05:10.440 LINK mkfs 00:05:10.440 LINK simple_copy 00:05:10.440 LINK aer 00:05:10.440 LINK reset 00:05:10.440 LINK sgl 00:05:10.440 LINK nvme_dp 00:05:10.440 LINK overhead 00:05:10.440 LINK nvme_compliance 00:05:10.698 LINK fdp 00:05:10.698 CC 
examples/nvme/hello_world/hello_world.o 00:05:10.698 LINK dif 00:05:10.698 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:10.698 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:10.698 CC examples/nvme/abort/abort.o 00:05:10.698 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:10.698 CC examples/nvme/arbitration/arbitration.o 00:05:10.698 CC examples/nvme/hotplug/hotplug.o 00:05:10.698 CC examples/nvme/reconnect/reconnect.o 00:05:10.698 CC examples/accel/perf/accel_perf.o 00:05:10.698 CC examples/blob/hello_world/hello_blob.o 00:05:10.698 CC examples/blob/cli/blobcli.o 00:05:10.956 LINK hello_world 00:05:10.956 LINK cmb_copy 00:05:10.956 LINK pmr_persistence 00:05:10.956 LINK iscsi_fuzz 00:05:10.956 LINK hotplug 00:05:10.956 LINK arbitration 00:05:10.956 LINK abort 00:05:11.214 LINK hello_blob 00:05:11.214 LINK nvme_manage 00:05:11.214 LINK accel_perf 00:05:11.214 LINK blobcli 00:05:11.214 CC test/bdev/bdevio/bdevio.o 00:05:11.471 LINK cuse 00:05:11.471 LINK reconnect 00:05:11.729 LINK bdevio 00:05:11.986 CC examples/bdev/hello_world/hello_bdev.o 00:05:11.986 CC examples/bdev/bdevperf/bdevperf.o 00:05:12.272 LINK hello_bdev 00:05:12.530 LINK bdevperf 00:05:13.464 CC examples/nvmf/nvmf/nvmf.o 00:05:13.722 LINK nvmf 00:05:15.098 LINK esnap 00:05:15.357 00:05:15.357 real 0m44.552s 00:05:15.357 user 12m58.662s 00:05:15.357 sys 3m41.266s 00:05:15.357 10:15:28 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:15.357 10:15:28 make -- common/autotest_common.sh@10 -- $ set +x 00:05:15.357 ************************************ 00:05:15.357 END TEST make 00:05:15.357 ************************************ 00:05:15.357 10:15:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:15.357 10:15:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:15.357 10:15:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:15.357 10:15:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.357 10:15:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:15.357 10:15:28 -- pm/common@44 -- $ pid=3119348 00:05:15.357 10:15:28 -- pm/common@50 -- $ kill -TERM 3119348 00:05:15.357 10:15:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.357 10:15:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:15.357 10:15:28 -- pm/common@44 -- $ pid=3119350 00:05:15.357 10:15:28 -- pm/common@50 -- $ kill -TERM 3119350 00:05:15.357 10:15:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.357 10:15:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:15.357 10:15:28 -- pm/common@44 -- $ pid=3119351 00:05:15.357 10:15:28 -- pm/common@50 -- $ kill -TERM 3119351 00:05:15.357 10:15:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.357 10:15:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:15.357 10:15:28 -- pm/common@44 -- $ pid=3119375 00:05:15.357 10:15:28 -- pm/common@50 -- $ sudo -E kill -TERM 3119375 00:05:15.357 10:15:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.357 10:15:28 -- nvmf/common.sh@7 -- # uname -s 00:05:15.357 10:15:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.357 10:15:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.357 10:15:28 
-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.357 10:15:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.357 10:15:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.357 10:15:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.357 10:15:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.357 10:15:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.357 10:15:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.357 10:15:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.357 10:15:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:05:15.357 10:15:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:05:15.357 10:15:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.357 10:15:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.357 10:15:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.357 10:15:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.357 10:15:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:15.357 10:15:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.357 10:15:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.357 10:15:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.357 10:15:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.357 10:15:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.357 10:15:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.357 10:15:28 -- paths/export.sh@5 -- # export PATH 00:05:15.357 10:15:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.357 10:15:28 -- nvmf/common.sh@47 -- # : 0 00:05:15.357 10:15:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:15.357 10:15:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:15.357 10:15:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.357 10:15:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.357 10:15:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.357 10:15:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:15.357 10:15:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:15.357 10:15:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:15.357 10:15:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:15.616 10:15:28 -- 
spdk/autotest.sh@32 -- # uname -s 00:05:15.616 10:15:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:15.616 10:15:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:15.616 10:15:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:05:15.616 10:15:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:15.616 10:15:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:05:15.616 10:15:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:15.616 10:15:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:15.616 10:15:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:15.616 10:15:28 -- spdk/autotest.sh@48 -- # udevadm_pid=3228999 00:05:15.617 10:15:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:15.617 10:15:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:15.617 10:15:28 -- pm/common@17 -- # local monitor 00:05:15.617 10:15:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.617 10:15:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.617 10:15:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.617 10:15:28 -- pm/common@21 -- # date +%s 00:05:15.617 10:15:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:15.617 10:15:28 -- pm/common@21 -- # date +%s 00:05:15.617 10:15:28 -- pm/common@21 -- # date +%s 00:05:15.617 10:15:28 -- pm/common@25 -- # sleep 1 00:05:15.617 10:15:28 -- pm/common@21 -- # date +%s 00:05:15.617 10:15:28 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721981728 00:05:15.617 10:15:28 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721981728 00:05:15.617 10:15:28 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721981728 00:05:15.617 10:15:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721981728 00:05:15.617 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721981728_collect-vmstat.pm.log 00:05:15.617 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721981728_collect-cpu-temp.pm.log 00:05:15.617 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721981728_collect-cpu-load.pm.log 00:05:15.617 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721981728_collect-bmc-pm.bmc.pm.log 00:05:16.552 10:15:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:16.552 10:15:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:16.552 10:15:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.552 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.552 
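The collect-cpu-load/collect-vmstat/collect-cpu-temp/collect-bmc-pm invocations above, together with the earlier kill -TERM calls against pid files under ../output/power, follow a simple start/stop-by-pidfile pattern. A rough bash sketch of that pattern, with helper names and pidfile locations assumed for illustration (this is not a copy of SPDK's pm/common helpers):

```bash
#!/bin/bash
# Hypothetical sketch: launch each resource collector in the background,
# remember its PID in a pidfile, and stop it later with SIGTERM.
spdk_root=/var/jenkins/workspace/crypto-phy-autotest/spdk
outdir=$spdk_root/../output/power
stamp=$(date +%s)

start_monitor() {
    local name=$1
    "$spdk_root/scripts/perf/pm/$name" -d "$outdir" -l -p "monitor.autotest.sh.$stamp" &
    echo $! > "$outdir/$name.pid"
}

stop_monitor() {
    local name=$1
    [[ -e "$outdir/$name.pid" ]] && kill -TERM "$(cat "$outdir/$name.pid")"
}

start_monitor collect-cpu-load
start_monitor collect-vmstat
# ... run the tests ...
stop_monitor collect-vmstat
stop_monitor collect-cpu-load
```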
10:15:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:16.552 10:15:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:16.552 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.552 10:15:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:05:16.552 10:15:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:16.552 10:15:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:16.552 10:15:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:05:16.552 10:15:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:16.552 10:15:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:16.552 10:15:29 -- common/autotest_common.sh@1455 -- # uname 00:05:16.552 10:15:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:16.552 10:15:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:16.552 10:15:29 -- common/autotest_common.sh@1475 -- # uname 00:05:16.552 10:15:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:16.552 10:15:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:16.552 10:15:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:16.552 10:15:29 -- spdk/autotest.sh@72 -- # hash lcov 00:05:16.552 10:15:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:16.552 10:15:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:16.552 --rc lcov_branch_coverage=1 00:05:16.552 --rc lcov_function_coverage=1 00:05:16.552 --rc genhtml_branch_coverage=1 00:05:16.552 --rc genhtml_function_coverage=1 00:05:16.552 --rc genhtml_legend=1 00:05:16.552 --rc geninfo_all_blocks=1 00:05:16.552 ' 00:05:16.552 10:15:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:16.552 --rc lcov_branch_coverage=1 00:05:16.552 --rc lcov_function_coverage=1 00:05:16.552 --rc genhtml_branch_coverage=1 00:05:16.552 --rc genhtml_function_coverage=1 00:05:16.553 --rc genhtml_legend=1 00:05:16.553 --rc geninfo_all_blocks=1 00:05:16.553 ' 00:05:16.553 10:15:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:16.553 --rc lcov_branch_coverage=1 00:05:16.553 --rc lcov_function_coverage=1 00:05:16.553 --rc genhtml_branch_coverage=1 00:05:16.553 --rc genhtml_function_coverage=1 00:05:16.553 --rc genhtml_legend=1 00:05:16.553 --rc geninfo_all_blocks=1 00:05:16.553 --no-external' 00:05:16.553 10:15:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:16.553 --rc lcov_branch_coverage=1 00:05:16.553 --rc lcov_function_coverage=1 00:05:16.553 --rc genhtml_branch_coverage=1 00:05:16.553 --rc genhtml_function_coverage=1 00:05:16.553 --rc genhtml_legend=1 00:05:16.553 --rc geninfo_all_blocks=1 00:05:16.553 --no-external' 00:05:16.553 10:15:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:16.812 lcov: LCOV version 1.14 00:05:16.812 10:15:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:05:31.725 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:31.725 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:46.613 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:46.613 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:46.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:46.614 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:46.614 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:46.614 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:46.614 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:46.614 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:46.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:46.615 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:46.615 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:46.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:49.910 10:16:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:49.910 10:16:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.910 10:16:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.910 10:16:02 -- spdk/autotest.sh@91 -- # rm -f 00:05:49.910 10:16:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:54.099 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:54.099 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:05:54.099 10:16:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:54.099 10:16:06 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:54.099 10:16:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:54.099 10:16:06 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:54.099 10:16:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:54.099 10:16:06 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:54.099 10:16:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:54.099 10:16:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:54.099 10:16:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:54.099 10:16:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:54.099 10:16:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:54.099 10:16:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:54.099 10:16:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:54.099 10:16:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:54.099 10:16:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:54.358 No valid GPT data, bailing 00:05:54.358 10:16:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:54.358 10:16:07 -- scripts/common.sh@391 -- # pt= 00:05:54.358 10:16:07 -- scripts/common.sh@392 -- # return 1 00:05:54.358 10:16:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:54.358 1+0 records in 00:05:54.358 1+0 records out 00:05:54.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479601 s, 219 MB/s 00:05:54.358 10:16:07 -- spdk/autotest.sh@118 -- # sync 00:05:54.358 10:16:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:54.358 10:16:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:54.358 10:16:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:02.487 10:16:14 -- spdk/autotest.sh@124 -- # uname -s 00:06:02.487 10:16:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:02.487 10:16:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:06:02.487 10:16:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.487 10:16:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.487 10:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:02.487 ************************************ 00:06:02.487 START TEST setup.sh 00:06:02.487 ************************************ 00:06:02.487 10:16:14 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:06:02.487 * Looking for test storage... 00:06:02.487 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:06:02.487 10:16:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:02.487 10:16:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:02.487 10:16:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:06:02.487 10:16:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.487 10:16:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.487 10:16:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:02.487 ************************************ 00:06:02.487 START TEST acl 00:06:02.487 ************************************ 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:06:02.487 * Looking for test storage... 
00:06:02.487 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:06:02.487 10:16:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:02.487 10:16:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:02.487 10:16:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:02.487 10:16:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:02.487 10:16:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:02.487 10:16:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:02.487 10:16:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:02.487 10:16:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:02.487 10:16:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.719 10:16:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:06.719 10:16:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:06.719 10:16:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:06.719 10:16:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:06.719 10:16:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.719 10:16:18 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:06:10.004 Hugepages 00:06:10.004 node hugesize free / total 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.004 00:06:10.004 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.004 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.263 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.264 10:16:22 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:10.264 10:16:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:10.264 10:16:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:10.264 10:16:23 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.264 10:16:23 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.264 10:16:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:10.264 ************************************ 00:06:10.264 START TEST denied 00:06:10.264 ************************************ 00:06:10.264 10:16:23 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:06:10.264 10:16:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:06:10.264 10:16:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:10.264 10:16:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:06:10.264 10:16:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.264 10:16:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:06:14.453 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:06:14.453 10:16:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:14.453 10:16:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:19.729 00:06:19.729 real 0m9.448s 00:06:19.729 user 0m2.877s 00:06:19.729 sys 0m5.794s 00:06:19.729 10:16:32 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.729 10:16:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:19.729 ************************************ 00:06:19.729 END TEST denied 00:06:19.729 ************************************ 00:06:19.989 10:16:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:19.989 10:16:32 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.989 10:16:32 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.989 10:16:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:19.989 ************************************ 00:06:19.989 START TEST allowed 00:06:19.989 ************************************ 00:06:19.989 10:16:32 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:19.989 10:16:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:06:19.989 10:16:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:19.989 10:16:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:06:19.989 10:16:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:19.989 10:16:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:06:26.553 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:26.553 10:16:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:06:26.553 10:16:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:26.553 10:16:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:26.553 10:16:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:26.553 10:16:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:30.771 00:06:30.771 real 0m10.975s 00:06:30.771 user 0m3.092s 00:06:30.771 sys 0m6.155s 00:06:30.771 10:16:43 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.771 10:16:43 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:30.771 ************************************ 00:06:30.771 END TEST allowed 00:06:30.771 ************************************ 00:06:31.030 00:06:31.030 real 0m29.305s 00:06:31.030 user 0m9.009s 00:06:31.030 sys 0m18.074s 00:06:31.030 10:16:43 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.030 10:16:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:31.030 ************************************ 00:06:31.030 END TEST acl 00:06:31.030 ************************************ 00:06:31.030 10:16:43 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:06:31.030 10:16:43 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.030 10:16:43 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.030 10:16:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:31.030 ************************************ 00:06:31.030 START TEST hugepages 00:06:31.030 ************************************ 00:06:31.030 10:16:43 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:06:31.030 * Looking for test storage... 00:06:31.030 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 39847824 kB' 'MemAvailable: 43850100 kB' 'Buffers: 6064 kB' 'Cached: 12116288 kB' 'SwapCached: 0 kB' 'Active: 8948036 kB' 'Inactive: 3689560 kB' 'Active(anon): 8549612 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518604 kB' 'Mapped: 187420 kB' 'Shmem: 8034368 kB' 'KReclaimable: 562000 kB' 'Slab: 1221784 kB' 'SReclaimable: 562000 kB' 'SUnreclaim: 659784 kB' 'KernelStack: 22096 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 10028864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219312 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.030 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.031 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:31.290 10:16:43 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:31.290 10:16:43 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:31.290 10:16:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.290 10:16:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.290 10:16:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:31.290 ************************************ 00:06:31.290 START TEST default_setup 00:06:31.290 ************************************ 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:31.290 10:16:43 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:35.482 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
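The hugepages.sh trace just above shows the fixture sizing and resetting the pool: get_nodes finds two NUMA nodes (no_nodes=2), clear_hp writes 0 into every hugepages-*/nr_hugepages counter on each node, CLEAR_HUGE=yes is exported, and get_test_nr_hugepages turns the requested 2097152 kB into 1024 pages of the 2048 kB default size for node 0. A hedged sketch of that clear-then-allocate flow (clear_node_hp and alloc_node_hp are illustrative names, not SPDK functions, and both writes need root):

    # Sketch only: per-node hugepage clear + allocate as implied by the trace.
    clear_node_hp() {                 # zero every page-size counter on one node
        local node=$1 hp
        for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    }
    alloc_node_hp() {                 # size_kb worth of 2048 kB pages on one node
        local node=$1 size_kb=$2 hugesz_kb=2048
        echo $(( size_kb / hugesz_kb )) > \
            "/sys/devices/system/node/node$node/hugepages/hugepages-${hugesz_kb}kB/nr_hugepages"
    }
    # clear_node_hp 0; clear_node_hp 1
    # alloc_node_hp 0 2097152        # -> 1024 pages, matching nr_hugepages=1024 above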
00:06:35.482 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:35.482 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:35.741 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:37.649 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42018052 kB' 'MemAvailable: 46019528 kB' 'Buffers: 6064 kB' 'Cached: 12116436 kB' 'SwapCached: 0 kB' 'Active: 8966044 kB' 'Inactive: 3689560 kB' 'Active(anon): 8567620 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536216 kB' 'Mapped: 187344 kB' 'Shmem: 8034516 kB' 'KReclaimable: 561200 kB' 'Slab: 1218496 kB' 'SReclaimable: 561200 kB' 'SUnreclaim: 657296 kB' 'KernelStack: 22128 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10047952 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 219232 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
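The '(8086 2021): ioatdma -> vfio-pci' and '(8086 0a54): nvme -> vfio-pci' lines a short way above are scripts/setup.sh detaching the kernel ioatdma and nvme drivers and handing those PCI functions to vfio-pci so the test can drive them from user space. A rough sketch of one such rebind through sysfs driver_override, shown only to illustrate the mechanism (it is not the script's actual code path):

    # Sketch only: rebind a single PCI function, e.g. 0000:d8:00.0, to vfio-pci.
    rebind_to_vfio() {
        local bdf=$1
        modprobe vfio-pci
        # Detach whatever driver currently owns the device, if any.
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        fi
        # Tell the PCI core which driver to use, then ask it to reprobe the device.
        echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf"   > /sys/bus/pci/drivers_probe
    }
    # rebind_to_vfio 0000:d8:00.0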
00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.649 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.650 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
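At this point verify_nr_hugepages has finished its AnonHugePages pass (0 kB in the snapshot, so anon=0) and is beginning the identical field-by-field scan for HugePages_Surp; HugePages_Rsvd follows the same way. A compact sketch of what those lookups amount to, reusing the meminfo_value helper sketched earlier (variable names are illustrative):

    # Sketch only: the values the verification loop is extracting one field at a time.
    anon=$(meminfo_value AnonHugePages)    # anonymous THP in use, 0 kB here
    surp=$(meminfo_value HugePages_Surp)   # surplus pages beyond the configured pool
    resv=$(meminfo_value HugePages_Rsvd)   # pages reserved but not yet faulted in
    echo "anon=$anon surp=$surp resv=$resv"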
00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42017092 kB' 'MemAvailable: 46018568 kB' 'Buffers: 6064 kB' 'Cached: 12116440 kB' 'SwapCached: 0 kB' 'Active: 8965892 kB' 'Inactive: 3689560 kB' 'Active(anon): 8567468 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536424 kB' 'Mapped: 187044 kB' 'Shmem: 8034520 kB' 'KReclaimable: 561200 kB' 'Slab: 1218600 kB' 'SReclaimable: 561200 kB' 'SUnreclaim: 657400 kB' 'KernelStack: 22064 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10047968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219264 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
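Every snapshot scanned in this test reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, i.e. the pool default_setup asked for is fully allocated and still unused. With a single page size and no surplus pages the Hugetlb figure is plain arithmetic:

    # Sketch only: consistency check the snapshot satisfies (single page size, surp=0).
    hugepagesize_kb=2048
    hugepages_total=1024
    echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'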
00:06:37.651 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.652 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42014900 kB' 'MemAvailable: 46016376 kB' 'Buffers: 6064 kB' 'Cached: 12116464 kB' 'SwapCached: 0 kB' 'Active: 8966852 kB' 'Inactive: 3689560 kB' 'Active(anon): 8568428 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537372 kB' 'Mapped: 187044 kB' 'Shmem: 8034544 kB' 'KReclaimable: 561200 kB' 'Slab: 1218600 kB' 'SReclaimable: 561200 kB' 'SUnreclaim: 657400 kB' 'KernelStack: 22240 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10048124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219248 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.653 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.654 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.655 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.655 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:37.916 nr_hugepages=1024 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:37.916 resv_hugepages=0 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:37.916 surplus_hugepages=0 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:37.916 anon_hugepages=0 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42013328 kB' 'MemAvailable: 46014804 kB' 'Buffers: 6064 kB' 'Cached: 12116484 kB' 'SwapCached: 0 kB' 'Active: 8966472 kB' 'Inactive: 3689560 kB' 'Active(anon): 
8568048 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536992 kB' 'Mapped: 187044 kB' 'Shmem: 8034564 kB' 'KReclaimable: 561200 kB' 'Slab: 1218600 kB' 'SReclaimable: 561200 kB' 'SUnreclaim: 657400 kB' 'KernelStack: 22224 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10048148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219312 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.916 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
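(Editor's note) The repeated IFS=': ' / read -r var val _ / continue entries traced above and below are the test's get_meminfo helper in setup/common.sh scanning /proc/meminfo (or a single node's meminfo) one field at a time until it reaches the requested key, here HugePages_Rsvd and then HugePages_Total. A minimal sketch of that helper, reconstructed only from the commands visible in this trace (the verbatim SPDK implementation may differ in detail), is:

  shopt -s extglob
  get_meminfo() {
    local get=$1             # meminfo field to return, e.g. HugePages_Total
    local node=${2:-}        # optional NUMA node id; empty means system-wide
    local var val _
    local mem_f=/proc/meminfo mem
    # per-node lookups read that node's own meminfo when it exists
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix used in per-node files
    local line
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && echo "$val" && return 0
    done
    return 1
  }

Calls matching the trace would be resv=$(get_meminfo HugePages_Rsvd) for the system-wide value and get_meminfo HugePages_Surp 0 for node 0.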
00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.917 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 
-- # return 0 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24992652 kB' 'MemUsed: 7646488 kB' 'SwapCached: 0 kB' 'Active: 3257756 kB' 'Inactive: 231284 kB' 'Active(anon): 3124708 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3125820 kB' 'Mapped: 80112 kB' 'AnonPages: 366328 kB' 'Shmem: 2761488 kB' 'KernelStack: 11336 kB' 'PageTables: 5312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222416 kB' 'Slab: 536092 kB' 'SReclaimable: 222416 kB' 'SUnreclaim: 313676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.918 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.919 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.920 10:16:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:37.920 node0=1024 expecting 1024 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:37.920 00:06:37.920 real 0m6.618s 00:06:37.920 user 0m1.711s 00:06:37.920 sys 0m3.079s 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.920 10:16:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:37.920 ************************************ 00:06:37.920 END TEST default_setup 00:06:37.920 ************************************ 00:06:37.920 10:16:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:37.920 10:16:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.920 10:16:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.920 10:16:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:37.920 ************************************ 00:06:37.920 START TEST per_node_1G_alloc 00:06:37.920 ************************************ 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:37.920 10:16:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:41.210 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:41.210 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:41.474 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:41.474 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:41.474 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:41.474 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:41.474 10:16:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42004020 kB' 'MemAvailable: 46005400 kB' 'Buffers: 6064 kB' 'Cached: 12116592 kB' 'SwapCached: 0 kB' 'Active: 8965252 kB' 'Inactive: 3689560 kB' 'Active(anon): 8566828 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535508 kB' 'Mapped: 185980 kB' 'Shmem: 8034672 kB' 'KReclaimable: 561104 kB' 'Slab: 1218480 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657376 kB' 'KernelStack: 22112 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10036484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219296 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.474 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
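The long runs of "IFS=': ' / read -r var val _ / continue" entries in this trace come from setup/common.sh's get_meminfo helper, which walks /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) one field at a time until it reaches the requested key, then echoes that key's value and returns. A minimal sketch of that pattern, using the paths and variable names visible in the trace; this is an illustrative reconstruction, not the verbatim SPDK helper:

    # Sketch of the meminfo-scanning loop seen in the xtrace above (assumed
    # helper name; the real function lives in scripts/setup/common.sh).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}      # e.g. HugePages_Surp, optional NUMA node
        local mem_f=/proc/meminfo var val line
        local -a mem
        # With a node argument, read that node's meminfo instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; drop that prefix
        mem=("${mem[@]#Node $node }")
        # Scan field by field until the requested key is found
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"           # numeric value (unit suffix, if any, is in $_)
                return 0
            fi
        done
        return 1
    }

Every non-matching key produces one "continue" entry in the trace, which is why a single get_meminfo HugePages_Surp or HugePages_Rsvd call expands into the long blocks of near-identical lines around this point in the log.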
00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42004456 kB' 'MemAvailable: 46005836 kB' 'Buffers: 6064 kB' 'Cached: 12116596 kB' 'SwapCached: 0 kB' 'Active: 8965208 kB' 'Inactive: 3689560 kB' 'Active(anon): 8566784 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535360 kB' 'Mapped: 185876 kB' 'Shmem: 8034676 kB' 'KReclaimable: 561104 kB' 'Slab: 1218512 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657408 kB' 'KernelStack: 22112 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10036504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219296 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.475 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.476 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42004744 kB' 'MemAvailable: 46006124 kB' 'Buffers: 6064 kB' 'Cached: 12116612 kB' 'SwapCached: 0 kB' 'Active: 8965404 kB' 'Inactive: 3689560 kB' 'Active(anon): 8566980 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535564 kB' 'Mapped: 185876 kB' 'Shmem: 8034692 kB' 'KReclaimable: 561104 kB' 'Slab: 1218512 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657408 kB' 'KernelStack: 22128 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10036524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219296 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.477 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.477
[... the remaining /proc/meminfo fields (Buffers, Cached, SwapCached, Active/Inactive and their (anon)/(file) variants, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) are read and rejected one by one in the same setup/common.sh@31-32 IFS=': ' / read / continue pattern, timestamps 00:06:41.477-00:06:41.478 ...]
10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:41.478 nr_hugepages=1024 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:41.478 resv_hugepages=0 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:41.478 surplus_hugepages=0 10:16:54
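The trace above is a single-field lookup over /proc/meminfo: the script splits each line on ':' and whitespace, skips every field that is not the one requested, and echoes the value when the requested field (here HugePages_Rsvd) is reached. A minimal standalone sketch of the same pattern follows; the helper name get_meminfo_field and its simplified structure are illustrative, not the repository's exact setup/common.sh code.

#!/usr/bin/env bash
# Sketch: look up one field of /proc/meminfo, mirroring the IFS=': ' /
# read / continue loop visible in the trace (simplified, not the real helper).

get_meminfo_field() {
    local get=$1          # field name to look up, e.g. HugePages_Rsvd
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every line until the requested field is reached,
        # which is what the long run of "continue" steps above shows.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: the traced run resolves HugePages_Rsvd to 0.
get_meminfo_field HugePages_Rsvd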
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:41.478 anon_hugepages=0 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42004744 kB' 'MemAvailable: 46006124 kB' 'Buffers: 6064 kB' 'Cached: 12116636 kB' 'SwapCached: 0 kB' 'Active: 8965304 kB' 'Inactive: 3689560 kB' 'Active(anon): 8566880 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535420 kB' 'Mapped: 185876 kB' 'Shmem: 8034716 kB' 'KReclaimable: 561104 kB' 'Slab: 1218512 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657408 kB' 'KernelStack: 22096 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10036548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219280 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:41.478 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:41.479 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.479
[... the remaining /proc/meminfo fields (MemAvailable through Unaccepted) are compared against HugePages_Total at setup/common.sh@31-32 and skipped with the same IFS=': ' / read / continue pattern, timestamps 00:06:41.479-00:06:41.741 ...]
10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc --
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26036332 kB' 'MemUsed: 6602808 kB' 'SwapCached: 0 kB' 'Active: 3256044 kB' 'Inactive: 231284 kB' 'Active(anon): 3122996 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3125832 kB' 'Mapped: 79320 kB' 'AnonPages: 364652 kB' 'Shmem: 2761500 kB' 'KernelStack: 11192 kB' 'PageTables: 4896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222320 kB' 'Slab: 535992 kB' 'SReclaimable: 222320 kB' 'SUnreclaim: 313672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.741 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:41.741
[... the remaining node0 meminfo fields (MemUsed through HugePages_Total) are compared against HugePages_Surp at setup/common.sh@31-32 and skipped with the same IFS=': ' / read / continue pattern, timestamps 00:06:41.741-00:06:41.743 ...]
10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15969020 kB' 'MemUsed: 11687060 kB' 'SwapCached: 0 kB' 'Active: 5709412 kB' 'Inactive: 3458276 kB' 'Active(anon): 5444036 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8996912 kB' 'Mapped: 106556 kB' 'AnonPages: 170916 kB' 'Shmem: 5273260 kB' 'KernelStack: 10920 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 338784 kB' 'Slab: 682520 kB' 'SReclaimable: 338784 kB' 'SUnreclaim: 343736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.743 
10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.743 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:41.744 node0=512 expecting 512 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:41.744 node1=512 expecting 512 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:41.744 00:06:41.744 real 0m3.740s 00:06:41.744 user 0m1.246s 00:06:41.744 sys 0m2.473s 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.744 10:16:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:41.744 ************************************ 00:06:41.744 END TEST per_node_1G_alloc 00:06:41.744 ************************************ 00:06:41.744 10:16:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:41.744 10:16:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.744 10:16:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.744 10:16:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:41.744 ************************************ 00:06:41.744 
START TEST even_2G_alloc 00:06:41.744 ************************************ 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:41.744 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:41.745 10:16:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:45.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
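[annotation] The per_node_1G_alloc run above ends with both NUMA nodes reporting the expected reservation (node0=512 expecting 512, node1=512 expecting 512). The even_2G_alloc test starting here passes 2097152 to get_test_nr_hugepages; with the 2048 kB hugepage size on this rig that becomes nr_hugepages=1024, pre-seeded as 512 pages on each of the two nodes before NRHUGE=1024 HUGE_EVEN_ALLOC=yes re-runs setup.sh. Below is a minimal sketch of that size-to-pages split, not the actual setup/hugepages.sh code; variable names are illustrative and it assumes a Linux /proc/meminfo.

#!/usr/bin/env bash
# Minimal sketch (not setup/hugepages.sh itself): turn the 2 GiB request from
# get_test_nr_hugepages into a page count and split it across two NUMA nodes.
size_kb=2097152                                                      # request in kB (2 GiB)
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this rig
nodes=2
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
per_node=$(( nr_hugepages / nodes ))                                 # 512 for node0, 512 for node1
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes -> expecting $per_node per node"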
00:06:45.941 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:45.941 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41972048 kB' 'MemAvailable: 45973428 kB' 'Buffers: 6064 kB' 'Cached: 12116756 kB' 'SwapCached: 0 kB' 'Active: 8967000 kB' 'Inactive: 3689560 kB' 'Active(anon): 8568576 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537080 kB' 'Mapped: 185920 kB' 'Shmem: 8034836 kB' 'KReclaimable: 561104 kB' 'Slab: 1218384 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657280 kB' 
'KernelStack: 22080 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10040196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219376 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.941 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.942 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41972156 kB' 'MemAvailable: 45973536 kB' 'Buffers: 6064 kB' 'Cached: 12116760 kB' 'SwapCached: 0 kB' 'Active: 8966028 kB' 'Inactive: 3689560 kB' 'Active(anon): 8567604 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536032 kB' 'Mapped: 185900 kB' 'Shmem: 8034840 kB' 'KReclaimable: 561104 kB' 'Slab: 1218392 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657288 kB' 'KernelStack: 22032 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10040216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219344 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.943 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
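
The trace above is walking /proc/meminfo one "Key: value" line at a time (IFS=': '; read -r var val _) until it reaches the field it was asked for, here HugePages_Rsvd, and echoes its value; when a node number is given it reads the per-node meminfo file instead. A minimal sketch of that kind of lookup, assuming only the "Key: value" meminfo layout — the function name and defaults below are illustrative, not the test suite's own setup/common.sh helper:

#!/usr/bin/env bash
# Illustrative helper (not SPDK's setup/common.sh): return the value of one
# meminfo field, either system-wide or for a single NUMA node.
lookup_meminfo() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node ]]; then
                file=/sys/devices/system/node/node${node}/meminfo
        fi
        # Per-node files prefix every line with "Node N "; drop that prefix so
        # both flavours parse the same way, then split each line on "Key: value".
        local var val _
        while IFS=': ' read -r var val _; do
                [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$file")
        return 1
}

lookup_meminfo HugePages_Rsvd      # 0 in the run logged here
lookup_meminfo HugePages_Total 0   # per-node count; 512 per node later in this run
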
00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41970280 kB' 'MemAvailable: 45971660 kB' 'Buffers: 6064 kB' 'Cached: 12116776 kB' 'SwapCached: 0 kB' 'Active: 8966632 kB' 'Inactive: 3689560 kB' 'Active(anon): 8568208 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536632 kB' 'Mapped: 185900 kB' 'Shmem: 8034856 kB' 'KReclaimable: 561104 kB' 'Slab: 1218392 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657288 kB' 'KernelStack: 22176 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10040236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219360 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 
10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.944 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 
10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.945 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:45.946 nr_hugepages=1024 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:45.946 resv_hugepages=0 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:45.946 surplus_hugepages=0 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:45.946 anon_hugepages=0 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41969896 kB' 'MemAvailable: 45971276 kB' 'Buffers: 6064 kB' 'Cached: 12116796 kB' 'SwapCached: 0 kB' 'Active: 8966408 kB' 'Inactive: 3689560 kB' 'Active(anon): 8567984 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536380 kB' 'Mapped: 185900 kB' 'Shmem: 8034876 kB' 'KReclaimable: 561104 kB' 'Slab: 1218392 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657288 kB' 'KernelStack: 22112 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10040256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219376 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.946 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.947 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:45.947 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26009672 kB' 'MemUsed: 6629468 kB' 'SwapCached: 0 kB' 'Active: 3258108 kB' 'Inactive: 231284 kB' 'Active(anon): 3125060 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3125868 kB' 'Mapped: 79320 kB' 'AnonPages: 366640 kB' 'Shmem: 2761536 kB' 'KernelStack: 11240 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222320 kB' 'Slab: 536120 kB' 'SReclaimable: 222320 kB' 'SUnreclaim: 313800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.948 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15957636 kB' 'MemUsed: 11698444 kB' 'SwapCached: 0 kB' 'Active: 5708348 kB' 'Inactive: 3458276 kB' 'Active(anon): 5442972 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8997036 kB' 'Mapped: 106580 kB' 'AnonPages: 169784 kB' 'Shmem: 5273384 kB' 'KernelStack: 10888 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 338784 kB' 'Slab: 682272 kB' 'SReclaimable: 338784 kB' 'SUnreclaim: 343488 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.949 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.949 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:45.950 node0=512 expecting 512 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:45.950 node1=512 expecting 512 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:45.950 00:06:45.950 real 0m4.039s 00:06:45.950 user 0m1.378s 00:06:45.950 sys 0m2.690s 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.950 10:16:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:45.950 ************************************ 00:06:45.950 END TEST even_2G_alloc 00:06:45.950 ************************************ 00:06:45.950 10:16:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:45.950 10:16:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.950 10:16:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.950 10:16:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:45.950 ************************************ 00:06:45.950 START TEST odd_alloc 00:06:45.950 ************************************ 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # 
get_test_nr_hugepages 2098176 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:45.950 10:16:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:50.148 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:50.148 
0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:50.148 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41970760 kB' 'MemAvailable: 45972140 kB' 'Buffers: 6064 kB' 'Cached: 12116932 kB' 'SwapCached: 0 kB' 'Active: 8968656 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570232 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538512 kB' 'Mapped: 185928 kB' 'Shmem: 8035012 kB' 'KReclaimable: 561104 kB' 'Slab: 1219164 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 658060 kB' 'KernelStack: 22160 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10041140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219280 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.148 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.149 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41967136 kB' 
'MemAvailable: 45968516 kB' 'Buffers: 6064 kB' 'Cached: 12116936 kB' 'SwapCached: 0 kB' 'Active: 8969124 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570700 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539068 kB' 'Mapped: 185852 kB' 'Shmem: 8035016 kB' 'KReclaimable: 561104 kB' 'Slab: 1219132 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 658028 kB' 'KernelStack: 22192 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10041156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219392 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.150 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 
10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41965480 kB' 'MemAvailable: 45966860 kB' 'Buffers: 6064 kB' 'Cached: 12116952 kB' 'SwapCached: 0 kB' 'Active: 8968980 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570556 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538836 kB' 'Mapped: 185912 kB' 'Shmem: 8035032 kB' 'KReclaimable: 561104 kB' 'Slab: 1219140 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 658036 kB' 'KernelStack: 22176 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10041176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219360 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:50.151 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.151 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.151 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.151 10:17:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.151 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.151 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.152 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 
10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:50.153 nr_hugepages=1025 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:50.153 resv_hugepages=0 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:50.153 surplus_hugepages=0 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:50.153 anon_hugepages=0 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo 
]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.153 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41964696 kB' 'MemAvailable: 45966076 kB' 'Buffers: 6064 kB' 'Cached: 12116972 kB' 'SwapCached: 0 kB' 'Active: 8969184 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570760 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539060 kB' 'Mapped: 185972 kB' 'Shmem: 8035052 kB' 'KReclaimable: 561104 kB' 'Slab: 1219044 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 657940 kB' 'KernelStack: 22224 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10040832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219344 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.154 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 
10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.416 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 
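The xtrace above shows setup/common.sh's get_meminfo helper scanning every key in /proc/meminfo (or a per-node meminfo file) until it reaches the requested field and echoing its value -- here HugePages_Total comes back as 1025. A minimal stand-alone sketch of that parsing pattern follows; it is reconstructed from what the trace prints (the mapfile, the "Node <n> " prefix strip and the IFS=': ' read) and is not the SPDK script itself:

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo pattern traced above (not the SPDK original).
    shopt -s extglob                      # needed for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}          # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node <n> "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"               # e.g. 1025 for HugePages_Total
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total           # system-wide count
    get_meminfo HugePages_Surp 0          # surplus pages on NUMA node 0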
00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26022368 kB' 'MemUsed: 6616772 kB' 'SwapCached: 0 kB' 'Active: 3259996 kB' 'Inactive: 231284 kB' 'Active(anon): 3126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3125924 kB' 'Mapped: 79320 kB' 'AnonPages: 368560 kB' 'Shmem: 2761592 kB' 'KernelStack: 11128 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222320 kB' 'Slab: 536584 kB' 'SReclaimable: 222320 kB' 'SUnreclaim: 314264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.417 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15941708 kB' 'MemUsed: 11714372 kB' 'SwapCached: 0 kB' 'Active: 5709272 kB' 'Inactive: 3458276 kB' 'Active(anon): 5443896 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8997152 kB' 'Mapped: 106592 kB' 'AnonPages: 170500 kB' 'Shmem: 5273500 kB' 'KernelStack: 11064 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 338784 kB' 'Slab: 682460 kB' 'SReclaimable: 338784 kB' 'SUnreclaim: 343676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.418 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.419 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:50.420 10:17:03 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:06:50.420 node0=512 expecting 513 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:06:50.420 node1=513 expecting 512 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:06:50.420 00:06:50.420 real 0m4.491s 00:06:50.420 user 0m1.621s 00:06:50.420 sys 0m2.954s 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.420 10:17:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:50.420 ************************************ 00:06:50.420 END TEST odd_alloc 00:06:50.420 ************************************ 00:06:50.420 10:17:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:50.420 10:17:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.420 10:17:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.420 10:17:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:50.420 ************************************ 00:06:50.420 START TEST custom_alloc 00:06:50.420 ************************************ 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
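Around this point the custom_alloc test derives its per-node hugepage targets: the trace shows get_test_nr_hugepages turning a 1048576 kB (1 GiB) request into nr_hugepages=512 for node 0 and, a few lines below, a 2097152 kB (2 GiB) request into 1024 for node 1, then assembling HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' for a 1536-page total (2048 kB pages, per the Hugepagesize reported later). A small sketch of that arithmetic, with names taken from the trace (a reconstruction, not the SPDK script; size_to_pages is an illustrative helper the traced script inlines):

    #!/usr/bin/env bash
    # Reconstruction of the per-node hugepage math traced here (not the SPDK original).
    default_hugepages=2048                  # kB, i.e. Hugepagesize from /proc/meminfo
    declare -a nodes_hp
    nr_hugepages=0

    size_to_pages() {                       # size in kB -> number of 2048 kB pages
        local size=$1
        (( size >= default_hugepages )) || return 1
        echo $(( size / default_hugepages ))
    }

    nodes_hp[0]=$(size_to_pages 1048576)    # 1 GiB  -> 512 pages on node 0
    nodes_hp[1]=$(size_to_pages 2097152)    # 2 GiB  -> 1024 pages on node 1

    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( nr_hugepages += nodes_hp[node] ))
    done

    (IFS=,; echo "HUGENODE=${HUGENODE[*]}") # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$nr_hugepages"       # 1536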
00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:50.420 10:17:03 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:50.420 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:50.421 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:50.421 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:50.421 10:17:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:50.421 10:17:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:50.421 10:17:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:54.609 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:54.609 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:54.609 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:54.609 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:54.610 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:54.610 
10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40901172 kB' 'MemAvailable: 44902552 kB' 'Buffers: 6064 kB' 'Cached: 12117104 kB' 'SwapCached: 0 kB' 'Active: 8970024 kB' 'Inactive: 3689560 kB' 'Active(anon): 8571600 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539160 kB' 'Mapped: 186020 kB' 'Shmem: 8035184 kB' 'KReclaimable: 561104 kB' 'Slab: 1217716 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 656612 kB' 'KernelStack: 22080 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10039080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219376 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:54.610 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... scan for AnonHugePages continues over the remaining /proc/meminfo keys (MemFree through HardwareCorrupted, in file order); none matches, so every iteration runs IFS=': ', read -r var val _ and continue ...]
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
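The trace above is get_meminfo() from setup/common.sh walking a /proc/meminfo snapshot with IFS=': ' and read -r var val _ until the requested key is found, then echoing its value (AnonHugePages is 0 kB on this node, so anon=0). A minimal self-contained sketch of that lookup, with an illustrative function name and without the per-NUMA-node handling the real helper has:

#!/usr/bin/env bash
# get_meminfo_sketch KEY: print the value of KEY from /proc/meminfo, or 0 if absent.
# Illustrative only; not the exact setup/common.sh implementation.
get_meminfo_sketch() {
    local get="$1" var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' splits "AnonHugePages:   0 kB" into var=AnonHugePages, val=0, _=kB
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

# Usage mirroring the values derived in this run:
anon=$(get_meminfo_sketch AnonHugePages)   # 0
surp=$(get_meminfo_sketch HugePages_Surp)  # 0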
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:54.611 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40901792 kB' 'MemAvailable: 44903172 kB' 'Buffers: 6064 kB' 'Cached: 12117104 kB' 'SwapCached: 0 kB' 'Active: 8970288 kB' 'Inactive: 3689560 kB' 'Active(anon): 8571864 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539420 kB' 'Mapped: 185988 kB' 'Shmem: 8035184 kB' 'KReclaimable: 561104 kB' 'Slab: 1217716 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 656612 kB' 'KernelStack: 22064 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10039096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219376 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB'
[... scan for HugePages_Surp walks MemTotal through HugePages_Rsvd; none matches, so every iteration runs IFS=': ', read -r var val _ and continue ...]
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:54.613 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40902668 kB' 'MemAvailable: 44904048 kB' 'Buffers: 6064 kB' 'Cached: 12117124 kB' 'SwapCached: 0 kB' 'Active: 8969308 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570884 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538900 kB' 'Mapped: 185908 kB' 'Shmem: 8035204 kB' 'KReclaimable: 561104 kB' 'Slab: 1217708 kB' 'SReclaimable: 561104 kB' 'SUnreclaim: 656604 kB' 'KernelStack: 22048 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10039120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219376 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB'
[... scan for HugePages_Rsvd walks MemTotal through HugePages_Free; none matches, so every iteration runs IFS=': ', read -r var val _ and continue ...]
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:06:54.615 nr_hugepages=1536
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:54.615 resv_hugepages=0
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:54.615 surplus_hugepages=0
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:54.615 anon_hugepages=0
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:06:54.615 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
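With anon=0, surp=0 and resv=0 derived above, the checks at setup/hugepages.sh@107-109 assert that the kernel's hugepage counters line up with the 1536 pages this custom_alloc test configured. A standalone sketch of that accounting, assuming plain /proc/meminfo and illustrative variable names (not the exact SPDK code):

#!/usr/bin/env bash
# Sketch of the hugepage accounting asserted above. Values in this run:
# 1536 x 2048 kB pages requested, HugePages_Surp=0, HugePages_Rsvd=0.
requested=1536
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# The configured count must be explained by the kernel-reported pool plus any
# surplus/reserved pages; with surp=0 and resv=0 both checks reduce to total == requested.
(( requested == total + surp + resv )) || echo "unexpected surplus/reserved hugepages"
(( requested == total )) || echo "hugepage count mismatch: kernel reports ${total}, wanted ${requested}"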
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.616 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26026792 kB' 'MemUsed: 6612348 kB' 'SwapCached: 0 kB' 'Active: 3258580 kB' 'Inactive: 231284 kB' 'Active(anon): 3125532 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3126020 kB' 'Mapped: 79320 kB' 'AnonPages: 366968 kB' 'Shmem: 2761688 kB' 'KernelStack: 11160 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222320 kB' 'Slab: 535736 kB' 'SReclaimable: 222320 kB' 'SUnreclaim: 313416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:54.617 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.618 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:54.619 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.619 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 14879840 kB' 'MemUsed: 12776240 kB' 'SwapCached: 0 kB' 'Active: 5710764 kB' 'Inactive: 3458276 kB' 'Active(anon): 5445388 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8997212 kB' 'Mapped: 106588 kB' 'AnonPages: 171924 kB' 'Shmem: 5273560 kB' 'KernelStack: 10888 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 338784 kB' 'Slab: 681948 kB' 'SReclaimable: 338784 kB' 'SUnreclaim: 343164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.620 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:54.621 node0=512 expecting 512 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:54.621 node1=1024 expecting 1024 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:54.621 00:06:54.621 real 0m4.197s 00:06:54.621 user 0m1.588s 00:06:54.621 sys 0m2.680s 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.621 10:17:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:54.621 ************************************ 00:06:54.621 END TEST custom_alloc 00:06:54.621 ************************************ 00:06:54.621 10:17:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:54.621 10:17:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.621 10:17:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.621 10:17:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:54.621 ************************************ 00:06:54.621 START TEST no_shrink_alloc 00:06:54.621 ************************************ 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
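Editor's note on the trace above: the custom_alloc test has just read HugePages_Total, HugePages_Rsvd and the per-node HugePages_Surp counters, confirmed "node0=512 expecting 512" and "node1=1024 expecting 1024", and the no_shrink_alloc test is now being prepared with 1024 pages pinned to node 0. The lookup the xtrace keeps repeating is a key/value scan of /proc/meminfo or of a per-node meminfo file. The lines below are a minimal sketch of that lookup under standard /proc and sysfs layouts; get_meminfo_sketch is a hypothetical helper written for illustration, not the project's setup/common.sh:get_meminfo.

# Print the value of one meminfo key, either system-wide or for a NUMA node.
get_meminfo_sketch() {
  local key=$1 node=$2 file=/proc/meminfo
  # With a node id, prefer the per-node view exposed by sysfs.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
    && file=/sys/devices/system/node/node$node/meminfo
  # Per-node files prefix every line with "Node <id> "; strip it, then match the key.
  sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k {print $2}'
}
# Example: get_meminfo_sketch HugePages_Surp 0   # surplus huge pages on node 0

The real script reaches the same result by reading the whole file into an array and stepping through it field by field, which is why the trace shows one "continue" per meminfo key until the requested key is hit and echoed.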
00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:54.621 10:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:58.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:58.817 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.817 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41931004 kB' 'MemAvailable: 45932320 kB' 'Buffers: 6064 kB' 'Cached: 12117264 kB' 'SwapCached: 0 kB' 'Active: 8969280 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570856 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538264 kB' 'Mapped: 186024 kB' 'Shmem: 8035344 kB' 'KReclaimable: 561040 kB' 'Slab: 1217684 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 656644 kB' 'KernelStack: 22144 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10043084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219488 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.817 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:58.818 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41930244 kB' 'MemAvailable: 45931560 kB' 'Buffers: 6064 kB' 'Cached: 12117268 kB' 'SwapCached: 0 kB' 'Active: 8971292 kB' 'Inactive: 3689560 kB' 'Active(anon): 8572868 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540228 kB' 'Mapped: 186024 kB' 'Shmem: 8035348 kB' 'KReclaimable: 561040 kB' 'Slab: 1217680 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 656640 kB' 'KernelStack: 22160 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10098128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219424 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 
'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 
10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 
10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.819 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:58.820 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41933720 kB' 'MemAvailable: 45935036 kB' 'Buffers: 6064 kB' 'Cached: 12117288 kB' 'SwapCached: 0 kB' 'Active: 8969360 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570936 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538900 kB' 'Mapped: 185948 kB' 'Shmem: 8035368 kB' 'KReclaimable: 561040 kB' 'Slab: 1217560 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 656520 kB' 'KernelStack: 22128 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10042760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219360 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.821 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.822 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- 
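The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" line at a time until it reaches HugePages_Rsvd, then echoing the value (0 in this run) and returning. A minimal sketch of that lookup, reconstructed from the trace; the herestring form and the literal "Node $node " prefix strip are simplifications, so the real helper in setup/common.sh may differ in detail:

  get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-specific file when it exists, as in the trace
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
      [[ -n $node ]] && line=${line#"Node $node "}        # node files prefix lines with "Node N "
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }  # e.g. HugePages_Rsvd -> 0
    done < "$mem_f"
    return 1   # key not found
  }

Used the way the trace uses it: resv=$(get_meminfo HugePages_Rsvd) yields 0 here, and get_meminfo HugePages_Surp 0 later on reads node0's meminfo instead.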
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:58.823 nr_hugepages=1024 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:58.823 resv_hugepages=0 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:58.823 surplus_hugepages=0 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:58.823 anon_hugepages=0 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41934216 kB' 'MemAvailable: 45935532 kB' 'Buffers: 6064 kB' 'Cached: 12117308 kB' 'SwapCached: 0 kB' 'Active: 8968920 kB' 'Inactive: 3689560 kB' 'Active(anon): 8570496 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538832 kB' 'Mapped: 185932 kB' 'Shmem: 8035388 kB' 'KReclaimable: 561040 kB' 'Slab: 1217560 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 656520 kB' 'KernelStack: 22112 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10042780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219328 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 
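At setup/hugepages.sh@102-110 the script echoes the counters it expects (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then re-reads HugePages_Total to confirm the kernel agrees. The arithmetic being asserted is simply total == nr + surplus + reserved; a self-contained check along those lines, with awk standing in for the get_meminfo helper:

  nr_hugepages=1024 surp=0 resv=0                          # values echoed by this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
  else
    echo "mismatch: kernel reports $total, expected $((nr_hugepages + surp + resv))" >&2
  fi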
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 
10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.823 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.824 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=0 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24980480 kB' 'MemUsed: 7658660 kB' 'SwapCached: 0 kB' 'Active: 3259968 kB' 'Inactive: 231284 kB' 'Active(anon): 3126920 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3126136 kB' 'Mapped: 79320 kB' 'AnonPages: 368228 kB' 'Shmem: 2761804 kB' 'KernelStack: 11128 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222320 kB' 'Slab: 536052 kB' 'SReclaimable: 222320 kB' 'SUnreclaim: 313732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 
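The get_nodes fragment traced at setup/hugepages.sh@27-33 enumerates the /sys/devices/system/node/node<N> directories and records how many hugepages each node is expected to hold; on this two-node box all 1024 pages land on node0 and node1 gets 0, which is why the follow-up get_meminfo HugePages_Surp 0 switches mem_f to /sys/devices/system/node/node0/meminfo for a node-local read. A rough sketch of that enumeration (the 1024/0 split is taken from this run, not a general rule):

  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=0        # default: expect no hugepages on this node
  done
  nodes_sys[0]=1024                    # this run pins all 1024 pages to node0
  no_nodes=${#nodes_sys[@]}            # 2 on this system
  echo "checking hugepages across $no_nodes nodes"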
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.825 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:58.826 node0=1024 expecting 1024 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:58.826 10:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:07:03.023 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:03.023 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:03.023 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:03.023 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:03.023 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:03.024 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:03.024 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local 
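The tail of the trace shows the point of the no_shrink_alloc case: node0 already holds 1024 hugepages ("node0=1024 expecting 1024"), and when setup.sh is re-run with NRHUGE=512 and CLEAR_HUGE=no it leaves the larger allocation in place rather than shrinking it, hence "INFO: Requested 512 hugepages but 1024 already allocated on node0". A simplified illustration of that guard; the real hugepage handling in scripts/setup.sh is more involved (per-size directories, CLEAR_HUGE, multiple nodes):

  NRHUGE=512
  nr_file=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  current=$(cat "$nr_file")
  if (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
  else
    echo "$NRHUGE" > "$nr_file"        # only grow, never shrink, the existing allocation
  fi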
surp 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41952700 kB' 'MemAvailable: 45954016 kB' 'Buffers: 6064 kB' 'Cached: 12117424 kB' 'SwapCached: 0 kB' 'Active: 8970508 kB' 'Inactive: 3689560 kB' 'Active(anon): 8572084 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539908 kB' 'Mapped: 186008 kB' 'Shmem: 8035504 kB' 'KReclaimable: 561040 kB' 'Slab: 1218004 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 656964 kB' 'KernelStack: 22032 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10042796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219328 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.024 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
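The wall of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pairs in this trace is the xtrace of a field-matching loop over /proc/meminfo: each key is compared against the one being queried (AnonHugePages here) and skipped until it matches, at which point its value is echoed back to the caller. A minimal standalone sketch of that pattern follows; get_meminfo_field is a hypothetical helper for illustration, not the actual setup/common.sh function, and it only covers the system-wide file (the real script also strips the "Node N" prefix when reading a per-node meminfo file).

#!/usr/bin/env bash
# Sketch of the /proc/meminfo field-matching loop visible in the trace above.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
        echo "$val"                        # kB figure, or a page count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_field HugePages_Surp    # 0 on this test node, per the trace
get_meminfo_field HugePages_Total   # 1024 on this test node, per the trace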
00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41954920 kB' 'MemAvailable: 45956236 kB' 'Buffers: 6064 kB' 'Cached: 12117428 kB' 'SwapCached: 0 kB' 'Active: 8970196 kB' 'Inactive: 3689560 kB' 'Active(anon): 8571772 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539584 kB' 'Mapped: 185940 kB' 'Shmem: 8035508 kB' 'KReclaimable: 561040 kB' 'Slab: 1218060 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 657020 kB' 'KernelStack: 22144 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10042820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219312 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.025 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.026 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41956872 kB' 'MemAvailable: 45958188 kB' 'Buffers: 6064 kB' 'Cached: 
12117444 kB' 'SwapCached: 0 kB' 'Active: 8969880 kB' 'Inactive: 3689560 kB' 'Active(anon): 8571456 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539196 kB' 'Mapped: 185940 kB' 'Shmem: 8035524 kB' 'KReclaimable: 561040 kB' 'Slab: 1218060 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 657020 kB' 'KernelStack: 22176 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10043852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219376 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.027 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.028 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
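The trace above is the field scan inside get_meminfo (setup/common.sh): each "key: value" line of the meminfo source is split with IFS=': ', and every key that is not the one requested (here HugePages_Rsvd) is skipped with continue until the match is found and its value echoed. A minimal stand-alone sketch of that pattern follows; it is not the script's actual helper, it assumes plain /proc/meminfo formatting, and the name get_meminfo_field is illustrative.

get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every field except the requested one
        echo "$val"                        # e.g. "0" for HugePages_Rsvd
        return 0
    done < /proc/meminfo
    return 1                               # field not present
}

With this, get_meminfo_field HugePages_Rsvd prints 0 on this host, matching the resv=0 echoed just below in the trace.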
00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:03.029 nr_hugepages=1024 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:03.029 resv_hugepages=0 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:03.029 surplus_hugepages=0 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:03.029 anon_hugepages=0 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:03.029 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295220 kB' 'MemFree: 41954140 kB' 'MemAvailable: 45955456 kB' 'Buffers: 6064 kB' 'Cached: 12117468 kB' 'SwapCached: 0 kB' 'Active: 8970480 kB' 'Inactive: 3689560 kB' 'Active(anon): 8572056 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539808 kB' 'Mapped: 185940 kB' 'Shmem: 8035548 kB' 'KReclaimable: 561040 kB' 'Slab: 1218060 kB' 'SReclaimable: 561040 kB' 'SUnreclaim: 657020 kB' 'KernelStack: 22208 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10044112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219424 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3474804 kB' 'DirectMap2M: 19279872 kB' 'DirectMap1G: 46137344 kB' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
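The nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0 values echoed above feed a simple consistency check: the HugePages_Total read back from meminfo must equal the requested pages plus surplus plus reserved, which is what the "(( 1024 == nr_hugepages + surp + resv ))" lines assert. A sketch of that arithmetic with this run's numbers plugged in; the variable names here are illustrative, not the script's.

nr_hugepages=1024   # pages requested for the test (vm.nr_hugepages)
resv=0              # HugePages_Rsvd read back from meminfo
surp=0              # HugePages_Surp read back from meminfo
total=1024          # HugePages_Total read back from meminfo
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi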
00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.030 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
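The same get_meminfo helper also selects its source: with no node argument it reads /proc/meminfo, and with a node number it switches mem_f to /sys/devices/system/node/node<N>/meminfo and strips the "Node N " prefix those per-node files carry, which is what the mapfile and "${mem[@]#Node +([0-9]) }" steps in the trace do (the node=0 variant follows below). A condensed sketch of that selection, under the hypothetical name read_meminfo; it needs extglob for the prefix pattern and is not the script's actual code.

shopt -s extglob                      # needed for the +([0-9]) prefix pattern
read_meminfo() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
    printf '%s\n' "${mem[@]}"
}

read_meminfo alone dumps the global view; read_meminfo 0 dumps node 0 with the prefix already stripped, which is the form the field scan above expects.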
00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:03.031 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24985580 kB' 'MemUsed: 7653560 kB' 'SwapCached: 0 kB' 'Active: 3261204 kB' 'Inactive: 231284 kB' 'Active(anon): 3128156 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3126188 kB' 'Mapped: 79320 kB' 'AnonPages: 369412 kB' 'Shmem: 2761856 kB' 'KernelStack: 11352 kB' 'PageTables: 5440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 222320 kB' 'Slab: 536172 kB' 'SReclaimable: 222320 kB' 'SUnreclaim: 313852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 
10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.032 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.033 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:03.034 node0=1024 expecting 1024 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:03.034 00:07:03.034 real 0m8.244s 00:07:03.034 user 0m2.909s 00:07:03.034 sys 0m5.407s 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.034 10:17:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:03.034 ************************************ 00:07:03.034 END TEST no_shrink_alloc 00:07:03.034 ************************************ 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:03.034 10:17:15 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:03.034 10:17:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:03.034 00:07:03.034 real 0m32.013s 00:07:03.034 user 0m10.710s 00:07:03.034 sys 0m19.764s 00:07:03.034 10:17:15 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.034 10:17:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:03.034 ************************************ 00:07:03.034 END TEST hugepages 00:07:03.034 ************************************ 00:07:03.034 10:17:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:07:03.034 10:17:15 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.034 10:17:15 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.034 10:17:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:03.034 ************************************ 00:07:03.034 START TEST driver 00:07:03.034 ************************************ 00:07:03.034 10:17:15 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:07:03.294 * Looking for test storage... 
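The clear_hp teardown traced above walks every NUMA node and every hugepage pool under /sys/devices/system/node/node<N>/hugepages/ and echoes 0 into it, then exports CLEAR_HUGE=yes so later stages know the pools were drained. A sketch of that cleanup, assuming the traced "echo 0" lines target each pool's nr_hugepages file (the standard sysfs knob; the redirection target is not visible in the trace) and that root privileges are available:

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target of the traced "echo 0"
        done
    done
    export CLEAR_HUGE=yes
}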
00:07:03.294 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:07:03.294 10:17:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:03.294 10:17:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:03.294 10:17:15 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:09.869 10:17:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:09.869 10:17:21 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.869 10:17:21 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.869 10:17:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:09.869 ************************************ 00:07:09.869 START TEST guess_driver 00:07:09.869 ************************************ 00:07:09.869 10:17:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:09.870 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:09.870 Looking for driver=vfio-pci 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:09.870 10:17:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 
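The guess_driver run above settles on vfio-pci by checking that VFIO is viable: it reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode (N here), counts the entries under /sys/kernel/iommu_groups (256 on this host), and uses modprobe --show-depends vfio_pci to confirm the module chain resolves to real .ko objects before comparing the result against the "No valid driver found" sentinel. A condensed sketch of that decision, not the script's actual pick_driver; the unsafe no-IOMMU OR-condition and the sentinel string are interpretations of this trace, and any non-VFIO fallback is omitted because none is exercised in this run.

pick_driver() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci is viable when IOMMU groups exist (or unsafe no-IOMMU mode is on)
    # and the vfio_pci module chain resolves to loadable .ko objects.
    if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; } &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
        return 0
    fi
    echo 'No valid driver found'
    return 1
}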
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:13.163 10:17:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:15.071 10:17:27 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:20.348 00:07:20.348 real 0m11.392s 00:07:20.348 user 0m2.931s 00:07:20.348 sys 0m5.799s 00:07:20.348 10:17:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.348 10:17:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:20.348 ************************************ 00:07:20.348 END TEST guess_driver 00:07:20.348 ************************************ 00:07:20.348 00:07:20.348 real 0m17.179s 00:07:20.348 user 0m4.603s 00:07:20.348 sys 0m9.054s 00:07:20.348 10:17:33 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.348 
10:17:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:20.348 ************************************ 00:07:20.348 END TEST driver 00:07:20.348 ************************************ 00:07:20.348 10:17:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:07:20.348 10:17:33 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.348 10:17:33 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.348 10:17:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:20.348 ************************************ 00:07:20.348 START TEST devices 00:07:20.348 ************************************ 00:07:20.348 10:17:33 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:07:20.348 * Looking for test storage... 00:07:20.348 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:07:20.348 10:17:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:20.348 10:17:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:20.348 10:17:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:20.348 10:17:33 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:25.619 10:17:37 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:25.619 No valid GPT data, bailing 
00:07:25.619 10:17:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:25.619 10:17:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:25.619 10:17:37 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:25.619 10:17:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.619 10:17:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:25.619 ************************************ 00:07:25.619 START TEST nvme_mount 00:07:25.619 ************************************ 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- 
setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:25.619 10:17:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:26.187 Creating new GPT entries in memory. 00:07:26.187 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:26.187 other utilities. 00:07:26.187 10:17:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:26.187 10:17:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:26.187 10:17:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:26.187 10:17:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:26.187 10:17:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:27.566 Creating new GPT entries in memory. 00:07:27.566 The operation has completed successfully. 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3270580 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:27.566 10:17:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.760 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:31.761 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:31.761 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:32.020 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:32.020 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:07:32.020 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:32.020 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 
-- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:32.020 10:17:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:36.217 10:17:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:40.448 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:40.448 00:07:40.448 real 0m14.905s 00:07:40.448 user 0m4.301s 00:07:40.448 sys 0m8.532s 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.448 10:17:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:40.448 ************************************ 00:07:40.448 END TEST nvme_mount 00:07:40.448 ************************************ 00:07:40.448 10:17:52 setup.sh.devices -- 
setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:40.448 10:17:52 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.448 10:17:52 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.448 10:17:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:40.448 ************************************ 00:07:40.448 START TEST dm_mount 00:07:40.448 ************************************ 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:40.448 10:17:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:41.385 Creating new GPT entries in memory. 00:07:41.385 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:41.385 other utilities. 00:07:41.385 10:17:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:41.385 10:17:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:41.385 10:17:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:41.385 10:17:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:41.385 10:17:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:42.323 Creating new GPT entries in memory. 00:07:42.323 The operation has completed successfully. 
00:07:42.323 10:17:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:42.323 10:17:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:42.323 10:17:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:42.323 10:17:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:42.323 10:17:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:07:43.261 The operation has completed successfully. 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3275803 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:43.261 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:43.520 
10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:43.520 10:17:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:07:47.712 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:47.713 10:18:00 
setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:47.713 10:18:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.906 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:51.907 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:51.907 00:07:51.907 real 0m11.592s 00:07:51.907 user 0m2.940s 00:07:51.907 sys 0m5.779s 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.907 10:18:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:51.907 ************************************ 00:07:51.907 END TEST dm_mount 00:07:51.907 ************************************ 00:07:51.907 10:18:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:51.907 10:18:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:51.907 10:18:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:07:51.907 10:18:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:51.907 10:18:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:51.907 10:18:04 
setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:51.907 10:18:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:52.166 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:52.166 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:07:52.166 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:52.166 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:52.166 10:18:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:52.166 00:07:52.166 real 0m31.793s 00:07:52.166 user 0m9.016s 00:07:52.166 sys 0m17.779s 00:07:52.166 10:18:04 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.166 10:18:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:52.166 ************************************ 00:07:52.166 END TEST devices 00:07:52.166 ************************************ 00:07:52.166 00:07:52.166 real 1m50.727s 00:07:52.166 user 0m33.477s 00:07:52.166 sys 1m5.006s 00:07:52.166 10:18:04 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.166 10:18:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:52.166 ************************************ 00:07:52.166 END TEST setup.sh 00:07:52.166 ************************************ 00:07:52.166 10:18:05 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:07:56.362 Hugepages 00:07:56.362 node hugesize free / total 00:07:56.362 node0 1048576kB 0 / 0 00:07:56.362 node0 2048kB 1024 / 1024 00:07:56.362 node1 1048576kB 0 / 0 00:07:56.362 node1 2048kB 1024 / 1024 00:07:56.362 00:07:56.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:56.362 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:56.362 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:56.362 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:56.362 10:18:09 -- spdk/autotest.sh@130 -- # uname -s 00:07:56.362 10:18:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:56.362 10:18:09 -- spdk/autotest.sh@132 -- # 
nvme_namespace_revert 00:07:56.362 10:18:09 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:08:00.559 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:00.559 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:00.816 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:02.746 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:08:02.746 10:18:15 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:04.123 10:18:16 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:04.123 10:18:16 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:04.123 10:18:16 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:04.123 10:18:16 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:04.123 10:18:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:04.123 10:18:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:04.123 10:18:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:04.123 10:18:16 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:04.123 10:18:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:04.123 10:18:16 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:04.123 10:18:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:08:04.123 10:18:16 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:08:08.313 Waiting for block devices as requested 00:08:08.313 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:08.313 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:08.313 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:08.313 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:08.313 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:08.313 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:08.313 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:08.572 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:08.572 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:08.572 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:08.831 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:08.831 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:08.831 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:09.089 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:09.089 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:09.089 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:09.347 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:09.347 10:18:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:09.347 10:18:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 
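The get_nvme_bdfs helper traced above builds its BDF list from gen_nvme.sh piped through jq. A minimal stand-alone sketch of that step, assuming the same workspace layout as this run; the fallback echo is illustrative and not part of the traced helper:

    rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
    # collect the PCI addresses (BDFs) of all NVMe controllers, as the trace does
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    if (( ${#bdfs[@]} == 0 )); then
        echo "no NVMe controllers found" >&2
    else
        printf '%s\n' "${bdfs[@]}"   # prints 0000:d8:00.0 on this node
    fi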
00:08:09.347 10:18:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:08:09.347 10:18:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:08:09.347 10:18:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:08:09.347 10:18:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:09.347 10:18:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:09.347 10:18:22 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:08:09.347 10:18:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:09.347 10:18:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:09.347 10:18:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:08:09.347 10:18:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:09.347 10:18:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:09.347 10:18:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:09.347 10:18:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:09.347 10:18:22 -- common/autotest_common.sh@1557 -- # continue 00:08:09.347 10:18:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:09.347 10:18:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.347 10:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.606 10:18:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:09.606 10:18:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.606 10:18:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.606 10:18:22 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:08:13.794 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:13.794 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:15.694 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:08:15.694 10:18:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:15.694 10:18:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.694 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.952 10:18:28 -- spdk/autotest.sh@144 -- # 
opal_revert_cleanup 00:08:15.952 10:18:28 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:08:15.952 10:18:28 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:08:15.952 10:18:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:08:15.952 10:18:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:08:15.952 10:18:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:08:15.952 10:18:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:15.952 10:18:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:15.952 10:18:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:15.952 10:18:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:15.952 10:18:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:15.952 10:18:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:08:15.952 10:18:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:08:15.952 10:18:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:08:15.952 10:18:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:08:15.952 10:18:28 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:08:15.952 10:18:28 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:15.952 10:18:28 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:08:15.952 10:18:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:08:15.952 10:18:28 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:08:15.952 10:18:28 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:15.952 10:18:28 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3288083 00:08:15.952 10:18:28 -- common/autotest_common.sh@1598 -- # waitforlisten 3288083 00:08:15.952 10:18:28 -- common/autotest_common.sh@831 -- # '[' -z 3288083 ']' 00:08:15.952 10:18:28 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.952 10:18:28 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.952 10:18:28 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.952 10:18:28 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.952 10:18:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.952 [2024-07-26 10:18:28.785841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
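The opal_revert_cleanup trace above narrows that BDF list to controllers with PCI device ID 0x0a54 by reading each device's sysfs "device" file. A sketch of the same check, with the function signature simplified for illustration (the real helper walks a global bdfs array rather than taking arguments):

    # usage: get_nvme_bdfs_by_id 0x0a54 0000:d8:00.0 [more BDFs ...]
    get_nvme_bdfs_by_id() {
        local id=$1; shift
        local bdf
        for bdf in "$@"; do
            # /sys/bus/pci/devices/<bdf>/device holds the PCI device ID, 0x0a54 for this drive
            [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && printf '%s\n' "$bdf"
        done
    }
    get_nvme_bdfs_by_id 0x0a54 0000:d8:00.0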
00:08:15.952 [2024-07-26 10:18:28.785900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288083 ] 00:08:16.209 [2024-07-26 10:18:28.910788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.209 [2024-07-26 10:18:28.955959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.143 10:18:29 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.143 10:18:29 -- common/autotest_common.sh@864 -- # return 0 00:08:17.143 10:18:29 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:08:17.143 10:18:29 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:08:17.143 10:18:29 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:08:20.425 nvme0n1 00:08:20.425 10:18:32 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:20.425 [2024-07-26 10:18:32.981253] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:20.425 request: 00:08:20.425 { 00:08:20.425 "nvme_ctrlr_name": "nvme0", 00:08:20.425 "password": "test", 00:08:20.425 "method": "bdev_nvme_opal_revert", 00:08:20.425 "req_id": 1 00:08:20.425 } 00:08:20.425 Got JSON-RPC error response 00:08:20.425 response: 00:08:20.425 { 00:08:20.425 "code": -32602, 00:08:20.425 "message": "Invalid parameters" 00:08:20.425 } 00:08:20.425 10:18:32 -- common/autotest_common.sh@1604 -- # true 00:08:20.425 10:18:32 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:08:20.425 10:18:32 -- common/autotest_common.sh@1608 -- # killprocess 3288083 00:08:20.425 10:18:32 -- common/autotest_common.sh@950 -- # '[' -z 3288083 ']' 00:08:20.425 10:18:32 -- common/autotest_common.sh@954 -- # kill -0 3288083 00:08:20.425 10:18:32 -- common/autotest_common.sh@955 -- # uname 00:08:20.425 10:18:33 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.425 10:18:33 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3288083 00:08:20.425 10:18:33 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.425 10:18:33 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.425 10:18:33 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3288083' 00:08:20.425 killing process with pid 3288083 00:08:20.425 10:18:33 -- common/autotest_common.sh@969 -- # kill 3288083 00:08:20.425 10:18:33 -- common/autotest_common.sh@974 -- # wait 3288083 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.425 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.426 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.426 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 
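The two JSON-RPC calls issued against the spdk_tgt instance above can be replayed by hand with the same rpc.py script; the paths, names, and flags below are the ones visible in the trace, and the fallback echo is illustrative since this drive rejects the revert with -32602 ("nvme0 not support opal"):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # attach the 0x0a54 controller found above as controller "nvme0"
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    # attempt the OPAL revert with the test passphrase; tolerate drives without OPAL support
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || echo "nvme0 does not support OPAL, skipping"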
00:08:20.427 EAL: Unexpected size 0 of DMA remapping
cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:20.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:08:23.012 10:18:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:08:23.012 10:18:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:23.012 10:18:35 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:08:23.012 10:18:35 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:08:23.012 10:18:35 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:08:23.579 Restarting all devices. 00:08:30.146 lstat() error: No such file or directory 00:08:30.146 QAT Error: No GENERAL section found 00:08:30.146 Failed to configure qat_dev0 00:08:30.146 lstat() error: No such file or directory 00:08:30.146 QAT Error: No GENERAL section found 00:08:30.146 Failed to configure qat_dev1 00:08:30.146 lstat() error: No such file or directory 00:08:30.146 QAT Error: No GENERAL section found 00:08:30.146 Failed to configure qat_dev2 00:08:30.146 lstat() error: No such file or directory 00:08:30.146 QAT Error: No GENERAL section found 00:08:30.146 Failed to configure qat_dev3 00:08:30.146 lstat() error: No such file or directory 00:08:30.146 QAT Error: No GENERAL section found 00:08:30.146 Failed to configure qat_dev4 00:08:30.146 enable sriov 00:08:30.146 Checking status of all devices. 00:08:30.146 There is 5 QAT acceleration device(s) in the system: 00:08:30.146 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:08:30.146 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:08:30.146 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:08:30.146 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:08:30.146 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:08:30.146 0000:1a:00.0 set to 16 VFs 00:08:31.079 0000:1c:00.0 set to 16 VFs 00:08:32.014 0000:1e:00.0 set to 16 VFs 00:08:32.580 0000:3d:00.0 set to 16 VFs 00:08:33.528 0000:3f:00.0 set to 16 VFs 00:08:36.057 Properly configured the qat device with driver uio_pci_generic. 
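qat_setup.sh reports each c6xx endpoint "set to 16 VFs" before rebinding to uio_pci_generic. The script's internals are not shown in this log; a hedged sketch of what that step typically corresponds to, assuming the standard sriov_numvfs sysfs interface:

    # hypothetical sketch: create 16 virtual functions per QAT physical function listed above
    for bdf in 0000:1a:00.0 0000:1c:00.0 0000:1e:00.0 0000:3d:00.0 0000:3f:00.0; do
        echo 0  > "/sys/bus/pci/devices/$bdf/sriov_numvfs"   # drop any existing VFs first
        echo 16 > "/sys/bus/pci/devices/$bdf/sriov_numvfs"   # matches the "set to 16 VFs" lines
    done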
00:08:36.057 10:18:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:36.057 10:18:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.057 10:18:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.057 10:18:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:36.057 10:18:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:08:36.057 10:18:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.057 10:18:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.057 10:18:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.057 ************************************ 00:08:36.057 START TEST env 00:08:36.057 ************************************ 00:08:36.057 10:18:48 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:08:36.057 * Looking for test storage... 00:08:36.057 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:08:36.057 10:18:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:08:36.057 10:18:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.057 10:18:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.057 10:18:48 env -- common/autotest_common.sh@10 -- # set +x 00:08:36.057 ************************************ 00:08:36.057 START TEST env_memory 00:08:36.057 ************************************ 00:08:36.057 10:18:48 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:08:36.057 00:08:36.057 00:08:36.057 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.057 http://cunit.sourceforge.net/ 00:08:36.057 00:08:36.057 00:08:36.057 Suite: memory 00:08:36.057 Test: alloc and free memory map ...[2024-07-26 10:18:48.793439] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:36.057 passed 00:08:36.057 Test: mem map translation ...[2024-07-26 10:18:48.820220] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:36.057 [2024-07-26 10:18:48.820243] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:36.057 [2024-07-26 10:18:48.820294] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:36.057 [2024-07-26 10:18:48.820306] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:36.057 passed 00:08:36.057 Test: mem map registration ...[2024-07-26 10:18:48.873409] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:36.057 [2024-07-26 10:18:48.873432] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:36.057 passed 00:08:36.057 Test: mem map adjacent registrations ...passed 00:08:36.057 00:08:36.058 Run 
Summary: Type Total Ran Passed Failed Inactive 00:08:36.058 suites 1 1 n/a 0 0 00:08:36.058 tests 4 4 4 0 0 00:08:36.058 asserts 152 152 152 0 n/a 00:08:36.058 00:08:36.058 Elapsed time = 0.185 seconds 00:08:36.058 00:08:36.058 real 0m0.199s 00:08:36.058 user 0m0.184s 00:08:36.058 sys 0m0.014s 00:08:36.058 10:18:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.058 10:18:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:36.058 ************************************ 00:08:36.058 END TEST env_memory 00:08:36.058 ************************************ 00:08:36.319 10:18:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:36.319 10:18:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.319 10:18:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.319 10:18:48 env -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 ************************************ 00:08:36.319 START TEST env_vtophys 00:08:36.319 ************************************ 00:08:36.319 10:18:49 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:36.319 EAL: lib.eal log level changed from notice to debug 00:08:36.319 EAL: Detected lcore 0 as core 0 on socket 0 00:08:36.319 EAL: Detected lcore 1 as core 1 on socket 0 00:08:36.319 EAL: Detected lcore 2 as core 2 on socket 0 00:08:36.319 EAL: Detected lcore 3 as core 3 on socket 0 00:08:36.319 EAL: Detected lcore 4 as core 4 on socket 0 00:08:36.319 EAL: Detected lcore 5 as core 5 on socket 0 00:08:36.319 EAL: Detected lcore 6 as core 6 on socket 0 00:08:36.319 EAL: Detected lcore 7 as core 8 on socket 0 00:08:36.319 EAL: Detected lcore 8 as core 9 on socket 0 00:08:36.319 EAL: Detected lcore 9 as core 10 on socket 0 00:08:36.319 EAL: Detected lcore 10 as core 11 on socket 0 00:08:36.319 EAL: Detected lcore 11 as core 12 on socket 0 00:08:36.319 EAL: Detected lcore 12 as core 13 on socket 0 00:08:36.319 EAL: Detected lcore 13 as core 14 on socket 0 00:08:36.319 EAL: Detected lcore 14 as core 16 on socket 0 00:08:36.319 EAL: Detected lcore 15 as core 17 on socket 0 00:08:36.319 EAL: Detected lcore 16 as core 18 on socket 0 00:08:36.319 EAL: Detected lcore 17 as core 19 on socket 0 00:08:36.319 EAL: Detected lcore 18 as core 20 on socket 0 00:08:36.319 EAL: Detected lcore 19 as core 21 on socket 0 00:08:36.319 EAL: Detected lcore 20 as core 22 on socket 0 00:08:36.319 EAL: Detected lcore 21 as core 24 on socket 0 00:08:36.319 EAL: Detected lcore 22 as core 25 on socket 0 00:08:36.319 EAL: Detected lcore 23 as core 26 on socket 0 00:08:36.319 EAL: Detected lcore 24 as core 27 on socket 0 00:08:36.319 EAL: Detected lcore 25 as core 28 on socket 0 00:08:36.319 EAL: Detected lcore 26 as core 29 on socket 0 00:08:36.319 EAL: Detected lcore 27 as core 30 on socket 0 00:08:36.319 EAL: Detected lcore 28 as core 0 on socket 1 00:08:36.319 EAL: Detected lcore 29 as core 1 on socket 1 00:08:36.319 EAL: Detected lcore 30 as core 2 on socket 1 00:08:36.319 EAL: Detected lcore 31 as core 3 on socket 1 00:08:36.319 EAL: Detected lcore 32 as core 4 on socket 1 00:08:36.319 EAL: Detected lcore 33 as core 5 on socket 1 00:08:36.319 EAL: Detected lcore 34 as core 6 on socket 1 00:08:36.319 EAL: Detected lcore 35 as core 8 on socket 1 00:08:36.319 EAL: Detected lcore 36 as core 9 on socket 1 00:08:36.319 EAL: Detected lcore 37 as core 10 on socket 1 00:08:36.319 
EAL: Detected lcore 38 as core 11 on socket 1 00:08:36.319 EAL: Detected lcore 39 as core 12 on socket 1 00:08:36.319 EAL: Detected lcore 40 as core 13 on socket 1 00:08:36.319 EAL: Detected lcore 41 as core 14 on socket 1 00:08:36.319 EAL: Detected lcore 42 as core 16 on socket 1 00:08:36.319 EAL: Detected lcore 43 as core 17 on socket 1 00:08:36.319 EAL: Detected lcore 44 as core 18 on socket 1 00:08:36.319 EAL: Detected lcore 45 as core 19 on socket 1 00:08:36.319 EAL: Detected lcore 46 as core 20 on socket 1 00:08:36.319 EAL: Detected lcore 47 as core 21 on socket 1 00:08:36.319 EAL: Detected lcore 48 as core 22 on socket 1 00:08:36.319 EAL: Detected lcore 49 as core 24 on socket 1 00:08:36.319 EAL: Detected lcore 50 as core 25 on socket 1 00:08:36.319 EAL: Detected lcore 51 as core 26 on socket 1 00:08:36.319 EAL: Detected lcore 52 as core 27 on socket 1 00:08:36.319 EAL: Detected lcore 53 as core 28 on socket 1 00:08:36.319 EAL: Detected lcore 54 as core 29 on socket 1 00:08:36.319 EAL: Detected lcore 55 as core 30 on socket 1 00:08:36.319 EAL: Detected lcore 56 as core 0 on socket 0 00:08:36.319 EAL: Detected lcore 57 as core 1 on socket 0 00:08:36.319 EAL: Detected lcore 58 as core 2 on socket 0 00:08:36.319 EAL: Detected lcore 59 as core 3 on socket 0 00:08:36.319 EAL: Detected lcore 60 as core 4 on socket 0 00:08:36.319 EAL: Detected lcore 61 as core 5 on socket 0 00:08:36.319 EAL: Detected lcore 62 as core 6 on socket 0 00:08:36.319 EAL: Detected lcore 63 as core 8 on socket 0 00:08:36.319 EAL: Detected lcore 64 as core 9 on socket 0 00:08:36.319 EAL: Detected lcore 65 as core 10 on socket 0 00:08:36.319 EAL: Detected lcore 66 as core 11 on socket 0 00:08:36.319 EAL: Detected lcore 67 as core 12 on socket 0 00:08:36.319 EAL: Detected lcore 68 as core 13 on socket 0 00:08:36.319 EAL: Detected lcore 69 as core 14 on socket 0 00:08:36.319 EAL: Detected lcore 70 as core 16 on socket 0 00:08:36.319 EAL: Detected lcore 71 as core 17 on socket 0 00:08:36.319 EAL: Detected lcore 72 as core 18 on socket 0 00:08:36.319 EAL: Detected lcore 73 as core 19 on socket 0 00:08:36.319 EAL: Detected lcore 74 as core 20 on socket 0 00:08:36.320 EAL: Detected lcore 75 as core 21 on socket 0 00:08:36.320 EAL: Detected lcore 76 as core 22 on socket 0 00:08:36.320 EAL: Detected lcore 77 as core 24 on socket 0 00:08:36.320 EAL: Detected lcore 78 as core 25 on socket 0 00:08:36.320 EAL: Detected lcore 79 as core 26 on socket 0 00:08:36.320 EAL: Detected lcore 80 as core 27 on socket 0 00:08:36.320 EAL: Detected lcore 81 as core 28 on socket 0 00:08:36.320 EAL: Detected lcore 82 as core 29 on socket 0 00:08:36.320 EAL: Detected lcore 83 as core 30 on socket 0 00:08:36.320 EAL: Detected lcore 84 as core 0 on socket 1 00:08:36.320 EAL: Detected lcore 85 as core 1 on socket 1 00:08:36.320 EAL: Detected lcore 86 as core 2 on socket 1 00:08:36.320 EAL: Detected lcore 87 as core 3 on socket 1 00:08:36.320 EAL: Detected lcore 88 as core 4 on socket 1 00:08:36.320 EAL: Detected lcore 89 as core 5 on socket 1 00:08:36.320 EAL: Detected lcore 90 as core 6 on socket 1 00:08:36.320 EAL: Detected lcore 91 as core 8 on socket 1 00:08:36.320 EAL: Detected lcore 92 as core 9 on socket 1 00:08:36.320 EAL: Detected lcore 93 as core 10 on socket 1 00:08:36.320 EAL: Detected lcore 94 as core 11 on socket 1 00:08:36.320 EAL: Detected lcore 95 as core 12 on socket 1 00:08:36.320 EAL: Detected lcore 96 as core 13 on socket 1 00:08:36.320 EAL: Detected lcore 97 as core 14 on socket 1 00:08:36.320 EAL: Detected lcore 98 as core 
16 on socket 1 00:08:36.320 EAL: Detected lcore 99 as core 17 on socket 1 00:08:36.320 EAL: Detected lcore 100 as core 18 on socket 1 00:08:36.320 EAL: Detected lcore 101 as core 19 on socket 1 00:08:36.320 EAL: Detected lcore 102 as core 20 on socket 1 00:08:36.320 EAL: Detected lcore 103 as core 21 on socket 1 00:08:36.320 EAL: Detected lcore 104 as core 22 on socket 1 00:08:36.320 EAL: Detected lcore 105 as core 24 on socket 1 00:08:36.320 EAL: Detected lcore 106 as core 25 on socket 1 00:08:36.320 EAL: Detected lcore 107 as core 26 on socket 1 00:08:36.320 EAL: Detected lcore 108 as core 27 on socket 1 00:08:36.320 EAL: Detected lcore 109 as core 28 on socket 1 00:08:36.320 EAL: Detected lcore 110 as core 29 on socket 1 00:08:36.320 EAL: Detected lcore 111 as core 30 on socket 1 00:08:36.320 EAL: Maximum logical cores by configuration: 128 00:08:36.320 EAL: Detected CPU lcores: 112 00:08:36.320 EAL: Detected NUMA nodes: 2 00:08:36.320 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:36.320 EAL: Detected shared linkage of DPDK 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:08:36.320 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:36.320 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so.23.0 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_auxiliary.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_mlx5.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_common_qat.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:08:36.320 EAL: open shared lib 
/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_ipsec_mb.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_crypto_mlx5.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_isal.so 00:08:36.320 EAL: open shared lib /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_compress_mlx5.so 00:08:36.320 EAL: No shared files mode enabled, IPC will be disabled 00:08:36.320 EAL: No shared files mode enabled, IPC is disabled 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA 
as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:08:36.320 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:08:36.321 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:08:36.321 EAL: Bus pci wants IOVA as 'PA' 00:08:36.321 EAL: Bus auxiliary wants IOVA as 'DC' 00:08:36.321 EAL: Bus vdev wants IOVA as 'DC' 00:08:36.321 EAL: Selected IOVA mode 'PA' 00:08:36.321 EAL: Probing VFIO support... 
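EAL settles on IOVA mode 'PA' here because the uio-bound qat devices ask for physical addressing, even though VFIO itself initializes a few lines later. A rough way to inspect those inputs from the shell; the sysfs paths are standard kernel interfaces, not something this log shows:

    # which driver a QAT VF is bound to; uio_pci_generic means the device needs IOVA as 'PA'
    basename "$(readlink -f /sys/bus/pci/devices/0000:1a:01.0/driver)"
    # how many IOMMU groups the kernel exposes, i.e. whether VFIO could be used at all
    ls /sys/kernel/iommu_groups/ 2>/dev/null | wc -l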
00:08:36.321 EAL: IOMMU type 1 (Type 1) is supported 00:08:36.321 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:36.321 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:36.321 EAL: VFIO support initialized 00:08:36.321 EAL: Ask a virtual area of 0x2e000 bytes 00:08:36.321 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:36.321 EAL: Setting up physically contiguous memory... 00:08:36.321 EAL: Setting maximum number of open files to 524288 00:08:36.321 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:36.321 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:36.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:36.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:36.321 EAL: Ask a virtual area of 0x61000 bytes 00:08:36.321 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:36.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:36.321 EAL: Ask a virtual area of 0x400000000 bytes 00:08:36.321 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:36.321 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:36.321 EAL: Hugepages will be freed exactly as allocated. 00:08:36.321 EAL: No shared files mode enabled, IPC is disabled 00:08:36.321 EAL: No shared files mode enabled, IPC is disabled 00:08:36.321 EAL: TSC frequency is ~2500000 KHz 00:08:36.321 EAL: Main lcore 0 is ready (tid=7feb17c7bb00;cpuset=[0]) 00:08:36.321 EAL: Trying to obtain current memory policy. 00:08:36.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.321 EAL: Restoring previous memory policy: 0 00:08:36.321 EAL: request: mp_malloc_sync 00:08:36.321 EAL: No shared files mode enabled, IPC is disabled 00:08:36.321 EAL: Heap on socket 0 was expanded by 2MB 00:08:36.321 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001000000 00:08:36.321 EAL: PCI memory mapped at 0x202001001000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001002000 00:08:36.321 EAL: PCI memory mapped at 0x202001003000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001004000 00:08:36.321 EAL: PCI memory mapped at 0x202001005000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001006000 00:08:36.321 EAL: PCI memory mapped at 0x202001007000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001008000 00:08:36.321 EAL: PCI memory mapped at 0x202001009000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x20200100a000 00:08:36.321 EAL: PCI memory mapped at 0x20200100b000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x20200100c000 00:08:36.321 EAL: PCI memory mapped at 0x20200100d000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x20200100e000 00:08:36.321 EAL: PCI memory mapped at 0x20200100f000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 
00:08:36.321 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001010000 00:08:36.321 EAL: PCI memory mapped at 0x202001011000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001012000 00:08:36.321 EAL: PCI memory mapped at 0x202001013000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001014000 00:08:36.321 EAL: PCI memory mapped at 0x202001015000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001016000 00:08:36.321 EAL: PCI memory mapped at 0x202001017000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x202001018000 00:08:36.321 EAL: PCI memory mapped at 0x202001019000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x20200101a000 00:08:36.321 EAL: PCI memory mapped at 0x20200101b000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x20200101c000 00:08:36.321 EAL: PCI memory mapped at 0x20200101d000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:08:36.321 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:08:36.321 EAL: probe driver: 8086:37c9 qat 00:08:36.321 EAL: PCI memory mapped at 0x20200101e000 00:08:36.321 EAL: PCI memory mapped at 0x20200101f000 00:08:36.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:08:36.321 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001020000 00:08:36.322 EAL: PCI memory mapped at 0x202001021000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001022000 00:08:36.322 EAL: PCI memory mapped at 0x202001023000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001024000 00:08:36.322 EAL: PCI memory mapped at 0x202001025000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001026000 00:08:36.322 EAL: PCI memory mapped at 0x202001027000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 
(socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001028000 00:08:36.322 EAL: PCI memory mapped at 0x202001029000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200102a000 00:08:36.322 EAL: PCI memory mapped at 0x20200102b000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200102c000 00:08:36.322 EAL: PCI memory mapped at 0x20200102d000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200102e000 00:08:36.322 EAL: PCI memory mapped at 0x20200102f000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001030000 00:08:36.322 EAL: PCI memory mapped at 0x202001031000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001032000 00:08:36.322 EAL: PCI memory mapped at 0x202001033000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001034000 00:08:36.322 EAL: PCI memory mapped at 0x202001035000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001036000 00:08:36.322 EAL: PCI memory mapped at 0x202001037000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001038000 00:08:36.322 EAL: PCI memory mapped at 0x202001039000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200103a000 00:08:36.322 EAL: PCI memory mapped at 0x20200103b000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200103c000 00:08:36.322 EAL: PCI memory mapped at 0x20200103d000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:08:36.322 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200103e000 00:08:36.322 EAL: PCI memory mapped at 0x20200103f000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:02.7 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001040000 00:08:36.322 EAL: PCI memory mapped at 0x202001041000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001042000 00:08:36.322 EAL: PCI memory mapped at 0x202001043000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001044000 00:08:36.322 EAL: PCI memory mapped at 0x202001045000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001046000 00:08:36.322 EAL: PCI memory mapped at 0x202001047000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001048000 00:08:36.322 EAL: PCI memory mapped at 0x202001049000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200104a000 00:08:36.322 EAL: PCI memory mapped at 0x20200104b000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200104c000 00:08:36.322 EAL: PCI memory mapped at 0x20200104d000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200104e000 00:08:36.322 EAL: PCI memory mapped at 0x20200104f000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001050000 00:08:36.322 EAL: PCI memory mapped at 0x202001051000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001052000 00:08:36.322 EAL: PCI memory mapped at 0x202001053000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001054000 00:08:36.322 EAL: PCI memory mapped at 0x202001055000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001056000 00:08:36.322 EAL: PCI memory mapped at 0x202001057000 00:08:36.322 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1e:02.3 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x202001058000 00:08:36.322 EAL: PCI memory mapped at 0x202001059000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200105a000 00:08:36.322 EAL: PCI memory mapped at 0x20200105b000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200105c000 00:08:36.322 EAL: PCI memory mapped at 0x20200105d000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:08:36.322 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.322 EAL: PCI memory mapped at 0x20200105e000 00:08:36.322 EAL: PCI memory mapped at 0x20200105f000 00:08:36.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:08:36.322 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:08:36.322 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001060000 00:08:36.323 EAL: PCI memory mapped at 0x202001061000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001060000 00:08:36.323 EAL: PCI memory unmapped at 0x202001061000 00:08:36.323 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001062000 00:08:36.323 EAL: PCI memory mapped at 0x202001063000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001062000 00:08:36.323 EAL: PCI memory unmapped at 0x202001063000 00:08:36.323 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001064000 00:08:36.323 EAL: PCI memory mapped at 0x202001065000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001064000 00:08:36.323 EAL: PCI memory unmapped at 0x202001065000 00:08:36.323 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001066000 00:08:36.323 EAL: PCI memory mapped at 0x202001067000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001066000 00:08:36.323 EAL: PCI memory unmapped at 0x202001067000 00:08:36.323 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: 
PCI memory mapped at 0x202001068000 00:08:36.323 EAL: PCI memory mapped at 0x202001069000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001068000 00:08:36.323 EAL: PCI memory unmapped at 0x202001069000 00:08:36.323 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x20200106a000 00:08:36.323 EAL: PCI memory mapped at 0x20200106b000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x20200106a000 00:08:36.323 EAL: PCI memory unmapped at 0x20200106b000 00:08:36.323 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x20200106c000 00:08:36.323 EAL: PCI memory mapped at 0x20200106d000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x20200106c000 00:08:36.323 EAL: PCI memory unmapped at 0x20200106d000 00:08:36.323 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x20200106e000 00:08:36.323 EAL: PCI memory mapped at 0x20200106f000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x20200106e000 00:08:36.323 EAL: PCI memory unmapped at 0x20200106f000 00:08:36.323 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001070000 00:08:36.323 EAL: PCI memory mapped at 0x202001071000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001070000 00:08:36.323 EAL: PCI memory unmapped at 0x202001071000 00:08:36.323 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001072000 00:08:36.323 EAL: PCI memory mapped at 0x202001073000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001072000 00:08:36.323 EAL: PCI memory unmapped at 0x202001073000 00:08:36.323 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001074000 00:08:36.323 EAL: PCI memory mapped at 0x202001075000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:36.323 EAL: PCI memory unmapped at 0x202001074000 00:08:36.323 EAL: PCI memory unmapped at 0x202001075000 00:08:36.323 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001076000 00:08:36.323 EAL: PCI memory mapped at 0x202001077000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001076000 00:08:36.323 EAL: PCI memory unmapped at 0x202001077000 00:08:36.323 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001078000 00:08:36.323 EAL: PCI memory mapped at 0x202001079000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001078000 00:08:36.323 EAL: PCI memory unmapped at 0x202001079000 00:08:36.323 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x20200107a000 00:08:36.323 EAL: PCI memory mapped at 0x20200107b000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x20200107a000 00:08:36.323 EAL: PCI memory unmapped at 0x20200107b000 00:08:36.323 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x20200107c000 00:08:36.323 EAL: PCI memory mapped at 0x20200107d000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x20200107c000 00:08:36.323 EAL: PCI memory unmapped at 0x20200107d000 00:08:36.323 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:36.323 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x20200107e000 00:08:36.323 EAL: PCI memory mapped at 0x20200107f000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x20200107e000 00:08:36.323 EAL: PCI memory unmapped at 0x20200107f000 00:08:36.323 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:36.323 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001080000 00:08:36.323 EAL: PCI memory mapped at 0x202001081000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001080000 00:08:36.323 EAL: PCI memory unmapped at 0x202001081000 00:08:36.323 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:36.323 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:08:36.323 EAL: 
probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001082000 00:08:36.323 EAL: PCI memory mapped at 0x202001083000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001082000 00:08:36.323 EAL: PCI memory unmapped at 0x202001083000 00:08:36.323 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:36.323 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001084000 00:08:36.323 EAL: PCI memory mapped at 0x202001085000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001084000 00:08:36.323 EAL: PCI memory unmapped at 0x202001085000 00:08:36.323 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:36.323 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:08:36.323 EAL: probe driver: 8086:37c9 qat 00:08:36.323 EAL: PCI memory mapped at 0x202001086000 00:08:36.323 EAL: PCI memory mapped at 0x202001087000 00:08:36.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:08:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.323 EAL: PCI memory unmapped at 0x202001086000 00:08:36.324 EAL: PCI memory unmapped at 0x202001087000 00:08:36.324 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x202001088000 00:08:36.324 EAL: PCI memory mapped at 0x202001089000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x202001088000 00:08:36.324 EAL: PCI memory unmapped at 0x202001089000 00:08:36.324 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x20200108a000 00:08:36.324 EAL: PCI memory mapped at 0x20200108b000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x20200108a000 00:08:36.324 EAL: PCI memory unmapped at 0x20200108b000 00:08:36.324 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x20200108c000 00:08:36.324 EAL: PCI memory mapped at 0x20200108d000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x20200108c000 00:08:36.324 EAL: PCI memory unmapped at 0x20200108d000 00:08:36.324 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x20200108e000 00:08:36.324 EAL: PCI memory mapped at 0x20200108f000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:08:36.324 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x20200108e000 00:08:36.324 EAL: PCI memory unmapped at 0x20200108f000 00:08:36.324 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x202001090000 00:08:36.324 EAL: PCI memory mapped at 0x202001091000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x202001090000 00:08:36.324 EAL: PCI memory unmapped at 0x202001091000 00:08:36.324 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x202001092000 00:08:36.324 EAL: PCI memory mapped at 0x202001093000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x202001092000 00:08:36.324 EAL: PCI memory unmapped at 0x202001093000 00:08:36.324 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x202001094000 00:08:36.324 EAL: PCI memory mapped at 0x202001095000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x202001094000 00:08:36.324 EAL: PCI memory unmapped at 0x202001095000 00:08:36.324 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x202001096000 00:08:36.324 EAL: PCI memory mapped at 0x202001097000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x202001096000 00:08:36.324 EAL: PCI memory unmapped at 0x202001097000 00:08:36.324 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x202001098000 00:08:36.324 EAL: PCI memory mapped at 0x202001099000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x202001098000 00:08:36.324 EAL: PCI memory unmapped at 0x202001099000 00:08:36.324 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x20200109a000 00:08:36.324 EAL: PCI memory mapped at 0x20200109b000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x20200109a000 00:08:36.324 EAL: PCI memory unmapped at 0x20200109b000 00:08:36.324 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:36.324 
EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x20200109c000 00:08:36.324 EAL: PCI memory mapped at 0x20200109d000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:08:36.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.324 EAL: PCI memory unmapped at 0x20200109c000 00:08:36.324 EAL: PCI memory unmapped at 0x20200109d000 00:08:36.324 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:36.324 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:08:36.324 EAL: probe driver: 8086:37c9 qat 00:08:36.324 EAL: PCI memory mapped at 0x20200109e000 00:08:36.324 EAL: PCI memory mapped at 0x20200109f000 00:08:36.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:08:36.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.325 EAL: PCI memory unmapped at 0x20200109e000 00:08:36.325 EAL: PCI memory unmapped at 0x20200109f000 00:08:36.325 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:36.325 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:08:36.325 EAL: probe driver: 8086:37d2 net_i40e 00:08:36.325 EAL: Not managed by a supported kernel driver, skipped 00:08:36.325 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:08:36.325 EAL: probe driver: 8086:37d2 net_i40e 00:08:36.325 EAL: Not managed by a supported kernel driver, skipped 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:36.325 EAL: Mem event callback 'spdk:(nil)' registered 00:08:36.325 00:08:36.325 00:08:36.325 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.325 http://cunit.sourceforge.net/ 00:08:36.325 00:08:36.325 00:08:36.325 Suite: components_suite 00:08:36.325 Test: vtophys_malloc_test ...passed 00:08:36.325 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:36.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.325 EAL: Restoring previous memory policy: 4 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was expanded by 4MB 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was shrunk by 4MB 00:08:36.325 EAL: Trying to obtain current memory policy. 00:08:36.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.325 EAL: Restoring previous memory policy: 4 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was expanded by 6MB 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was shrunk by 6MB 00:08:36.325 EAL: Trying to obtain current memory policy. 
00:08:36.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.325 EAL: Restoring previous memory policy: 4 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was expanded by 10MB 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was shrunk by 10MB 00:08:36.325 EAL: Trying to obtain current memory policy. 00:08:36.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.325 EAL: Restoring previous memory policy: 4 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was expanded by 18MB 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was shrunk by 18MB 00:08:36.325 EAL: Trying to obtain current memory policy. 00:08:36.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.325 EAL: Restoring previous memory policy: 4 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was expanded by 34MB 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was shrunk by 34MB 00:08:36.325 EAL: Trying to obtain current memory policy. 00:08:36.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.325 EAL: Restoring previous memory policy: 4 00:08:36.325 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.325 EAL: request: mp_malloc_sync 00:08:36.325 EAL: No shared files mode enabled, IPC is disabled 00:08:36.325 EAL: Heap on socket 0 was expanded by 66MB 00:08:36.584 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.584 EAL: request: mp_malloc_sync 00:08:36.584 EAL: No shared files mode enabled, IPC is disabled 00:08:36.584 EAL: Heap on socket 0 was shrunk by 66MB 00:08:36.584 EAL: Trying to obtain current memory policy. 00:08:36.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.584 EAL: Restoring previous memory policy: 4 00:08:36.584 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.584 EAL: request: mp_malloc_sync 00:08:36.584 EAL: No shared files mode enabled, IPC is disabled 00:08:36.584 EAL: Heap on socket 0 was expanded by 130MB 00:08:36.584 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.584 EAL: request: mp_malloc_sync 00:08:36.584 EAL: No shared files mode enabled, IPC is disabled 00:08:36.584 EAL: Heap on socket 0 was shrunk by 130MB 00:08:36.584 EAL: Trying to obtain current memory policy. 
00:08:36.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.584 EAL: Restoring previous memory policy: 4 00:08:36.584 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.584 EAL: request: mp_malloc_sync 00:08:36.584 EAL: No shared files mode enabled, IPC is disabled 00:08:36.584 EAL: Heap on socket 0 was expanded by 258MB 00:08:36.584 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.584 EAL: request: mp_malloc_sync 00:08:36.584 EAL: No shared files mode enabled, IPC is disabled 00:08:36.584 EAL: Heap on socket 0 was shrunk by 258MB 00:08:36.584 EAL: Trying to obtain current memory policy. 00:08:36.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:36.843 EAL: Restoring previous memory policy: 4 00:08:36.843 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.843 EAL: request: mp_malloc_sync 00:08:36.843 EAL: No shared files mode enabled, IPC is disabled 00:08:36.843 EAL: Heap on socket 0 was expanded by 514MB 00:08:36.843 EAL: Calling mem event callback 'spdk:(nil)' 00:08:36.843 EAL: request: mp_malloc_sync 00:08:36.843 EAL: No shared files mode enabled, IPC is disabled 00:08:36.843 EAL: Heap on socket 0 was shrunk by 514MB 00:08:36.843 EAL: Trying to obtain current memory policy. 00:08:36.843 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:37.101 EAL: Restoring previous memory policy: 4 00:08:37.101 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.101 EAL: request: mp_malloc_sync 00:08:37.101 EAL: No shared files mode enabled, IPC is disabled 00:08:37.101 EAL: Heap on socket 0 was expanded by 1026MB 00:08:37.360 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.360 EAL: request: mp_malloc_sync 00:08:37.360 EAL: No shared files mode enabled, IPC is disabled 00:08:37.360 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:37.360 passed 00:08:37.360 00:08:37.360 Run Summary: Type Total Ran Passed Failed Inactive 00:08:37.360 suites 1 1 n/a 0 0 00:08:37.360 tests 2 2 2 0 0 00:08:37.360 asserts 6492 6492 6492 0 n/a 00:08:37.360 00:08:37.360 Elapsed time = 1.017 seconds 00:08:37.360 EAL: No shared files mode enabled, IPC is disabled 00:08:37.360 EAL: No shared files mode enabled, IPC is disabled 00:08:37.360 EAL: No shared files mode enabled, IPC is disabled 00:08:37.360 00:08:37.360 real 0m1.217s 00:08:37.360 user 0m0.673s 00:08:37.360 sys 0m0.515s 00:08:37.360 10:18:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.360 10:18:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:37.360 ************************************ 00:08:37.360 END TEST env_vtophys 00:08:37.360 ************************************ 00:08:37.618 10:18:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:08:37.618 10:18:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.618 10:18:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.618 10:18:50 env -- common/autotest_common.sh@10 -- # set +x 00:08:37.618 ************************************ 00:08:37.618 START TEST env_pci 00:08:37.618 ************************************ 00:08:37.618 10:18:50 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:08:37.618 00:08:37.618 00:08:37.618 CUnit - A unit testing framework for C - Version 2.1-3 00:08:37.618 http://cunit.sourceforge.net/ 00:08:37.618 00:08:37.618 00:08:37.618 Suite: pci 00:08:37.618 Test: pci_hook ...[2024-07-26 10:18:50.349497] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3292019 has claimed it 00:08:37.618 EAL: Cannot find device (10000:00:01.0) 00:08:37.618 EAL: Failed to attach device on primary process 00:08:37.618 passed 00:08:37.618 00:08:37.618 Run Summary: Type Total Ran Passed Failed Inactive 00:08:37.618 suites 1 1 n/a 0 0 00:08:37.618 tests 1 1 1 0 0 00:08:37.618 asserts 25 25 25 0 n/a 00:08:37.618 00:08:37.618 Elapsed time = 0.046 seconds 00:08:37.618 00:08:37.618 real 0m0.075s 00:08:37.618 user 0m0.019s 00:08:37.618 sys 0m0.055s 00:08:37.618 10:18:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.618 10:18:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:37.618 ************************************ 00:08:37.618 END TEST env_pci 00:08:37.618 ************************************ 00:08:37.618 10:18:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:37.618 10:18:50 env -- env/env.sh@15 -- # uname 00:08:37.618 10:18:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:37.618 10:18:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:37.618 10:18:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:37.618 10:18:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:37.618 10:18:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.618 10:18:50 env -- common/autotest_common.sh@10 -- # set +x 00:08:37.618 ************************************ 00:08:37.618 START TEST env_dpdk_post_init 00:08:37.618 ************************************ 00:08:37.618 10:18:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:37.879 EAL: Detected CPU lcores: 112 00:08:37.879 EAL: Detected NUMA nodes: 2 00:08:37.879 EAL: Detected shared linkage of DPDK 00:08:37.879 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:37.879 EAL: Selected IOVA mode 'PA' 00:08:37.879 EAL: VFIO support initialized 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:08:37.879 CRYPTODEV: 
Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 
0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue 
pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.879 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:08:37.879 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.879 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1c:02.3 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 
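[editor's aside] Each successful probe above registers a pair of cryptodevs (the ..._qat_sym and ..._qat_asym devices being created) with DPDK. As a rough, hypothetical illustration (not part of the test binary; assumes rte_eal_init() has already run, as in this log), those devices could be enumerated with the standard cryptodev API:

#include <stdio.h>
#include <rte_cryptodev.h>

/* List every cryptodev the probe phase registered, e.g. 0000:1a:01.0_qat_sym. */
static void list_qat_cryptodevs(void)
{
	uint8_t count = rte_cryptodev_count();

	printf("%u cryptodevs registered\n", count);
	for (uint8_t id = 0; id < count; id++) {
		struct rte_cryptodev_info info;

		rte_cryptodev_info_get(id, &info);
		printf("  %s: driver %s, max queue pairs %u\n",
		       rte_cryptodev_name_get(id), info.driver_name,
		       info.max_nb_queue_pairs);
	}
}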
00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 
0000:1e:02.3_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:37.880 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:08:37.880 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.880 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:37.880 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:08:37.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 
00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:37.881 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:08:37.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.881 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:37.881 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:37.881 EAL: Using IOMMU type 1 (Type 1) 00:08:37.881 EAL: Ignore mapping IO port bar(1) 00:08:37.881 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:08:37.881 EAL: Ignore mapping IO port bar(1) 00:08:37.881 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:08:37.881 EAL: Ignore mapping IO port bar(1) 00:08:37.881 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:08:37.881 EAL: Ignore mapping IO port bar(1) 00:08:37.881 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:08:37.881 EAL: Ignore mapping IO port bar(1) 00:08:37.881 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:38.141 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:08:38.141 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:38.141 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:08:38.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:38.141 EAL: Ignore mapping IO port bar(1) 
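The long runs of qat (8086:37c9) probe lines, and the qat_pci_device_allocate() / "cannot be used" pairs that follow them, show EAL walking every QAT virtual function on the host; once the QAT driver has registered its fixed maximum number of devices, the remaining functions are still probed but skipped, so on a machine with this many VFs the block is expected noise rather than a failure. A quick way to cross-check the storm against the hardware (an illustrative one-liner, not part of the test suite) is to count the matching PCI functions directly:

    # 8086:37c9 is the vendor:device ID printed in the EAL probe lines above.
    lspci -nd 8086:37c9 | wc -l      # how many QAT VFs the kernel exposes
    lspci -nd 8086:37c9 | head -n 3  # a few of the BDF addresses EAL iterates over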
00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:08:38.141 EAL: Ignore mapping IO port bar(1) 00:08:38.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:39.078 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:08:43.295 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:08:43.295 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:08:43.295 Starting DPDK initialization... 00:08:43.295 Starting SPDK post initialization... 00:08:43.295 SPDK NVMe probe 00:08:43.295 Attaching to 0000:d8:00.0 00:08:43.295 Attached to 0000:d8:00.0 00:08:43.295 Cleaning up... 00:08:43.295 00:08:43.295 real 0m5.438s 00:08:43.295 user 0m3.990s 00:08:43.295 sys 0m0.497s 00:08:43.295 10:18:55 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.295 10:18:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:43.295 ************************************ 00:08:43.295 END TEST env_dpdk_post_init 00:08:43.295 ************************************ 00:08:43.295 10:18:55 env -- env/env.sh@26 -- # uname 00:08:43.295 10:18:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:43.295 10:18:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:43.295 10:18:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.295 10:18:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.295 10:18:55 env -- common/autotest_common.sh@10 -- # set +x 00:08:43.295 ************************************ 00:08:43.295 START TEST env_mem_callbacks 00:08:43.295 ************************************ 00:08:43.295 10:18:56 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:43.295 EAL: Detected CPU lcores: 112 00:08:43.295 EAL: Detected NUMA nodes: 2 00:08:43.295 EAL: Detected shared linkage of DPDK 00:08:43.295 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:43.295 EAL: Selected IOVA mode 'PA' 00:08:43.295 EAL: VFIO support initialized 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1a:01.1 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:08:43.295 CRYPTODEV: 
Creating cryptodev 0000:1a:02.1_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.295 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:08:43.295 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.295 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 
00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters 
- name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max 
queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.296 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.296 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:08:43.296 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating 
cryptodev 0000:1e:02.1_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:08:43.297 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:08:43.297 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:08:43.297 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:01.3 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:43.297 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:08:43.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.297 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:43.297 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:43.297 00:08:43.297 00:08:43.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:43.297 http://cunit.sourceforge.net/ 00:08:43.297 00:08:43.297 00:08:43.297 Suite: memory 00:08:43.297 Test: test ... 
00:08:43.297 register 0x200000200000 2097152 00:08:43.297 malloc 3145728 00:08:43.297 register 0x200000400000 4194304 00:08:43.297 buf 0x200000500000 len 3145728 PASSED 00:08:43.297 malloc 64 00:08:43.297 buf 0x2000004fff40 len 64 PASSED 00:08:43.298 malloc 4194304 00:08:43.298 register 0x200000800000 6291456 00:08:43.298 buf 0x200000a00000 len 4194304 PASSED 00:08:43.298 free 0x200000500000 3145728 00:08:43.298 free 0x2000004fff40 64 00:08:43.298 unregister 0x200000400000 4194304 PASSED 00:08:43.298 free 0x200000a00000 4194304 00:08:43.298 unregister 0x200000800000 6291456 PASSED 00:08:43.298 malloc 8388608 00:08:43.298 register 0x200000400000 10485760 00:08:43.298 buf 0x200000600000 len 8388608 PASSED 00:08:43.298 free 0x200000600000 8388608 00:08:43.298 unregister 0x200000400000 10485760 PASSED 00:08:43.298 passed 00:08:43.298 00:08:43.298 Run Summary: Type Total Ran Passed Failed Inactive 00:08:43.298 suites 1 1 n/a 0 0 00:08:43.298 tests 1 1 1 0 0 00:08:43.298 asserts 15 15 15 0 n/a 00:08:43.298 00:08:43.298 Elapsed time = 0.006 seconds 00:08:43.298 00:08:43.298 real 0m0.112s 00:08:43.298 user 0m0.028s 00:08:43.298 sys 0m0.084s 00:08:43.298 10:18:56 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.298 10:18:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:43.298 ************************************ 00:08:43.298 END TEST env_mem_callbacks 00:08:43.298 ************************************ 00:08:43.298 00:08:43.298 real 0m7.574s 00:08:43.298 user 0m5.077s 00:08:43.298 sys 0m1.555s 00:08:43.298 10:18:56 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.298 10:18:56 env -- common/autotest_common.sh@10 -- # set +x 00:08:43.298 ************************************ 00:08:43.298 END TEST env 00:08:43.298 ************************************ 00:08:43.556 10:18:56 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:08:43.556 10:18:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.556 10:18:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.556 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.556 ************************************ 00:08:43.556 START TEST rpc 00:08:43.556 ************************************ 00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:08:43.556 * Looking for test storage... 00:08:43.556 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:43.556 10:18:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3293132 00:08:43.556 10:18:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.556 10:18:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:43.556 10:18:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3293132 00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@831 -- # '[' -z 3293132 ']' 00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
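The rpc.sh trace above starts the SPDK target in the background with only the bdev trace group enabled (spdk_tgt -e bdev) and then waits for its JSON-RPC socket to come up. A hand-driven equivalent, assuming the same build tree and the stock scripts/rpc.py client (the polling loop is an illustrative sketch, not the harness's waitforlisten implementation):

    # Launch the target the same way rpc.sh does (path taken from the log above).
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!

    # Poll the default JSON-RPC socket until the target answers; rpc_get_methods
    # is a cheap request that succeeds once the RPC server is listening.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $spdk_pid) listening on /var/tmp/spdk.sock"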
00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.556 10:18:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.556 [2024-07-26 10:18:56.441361] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:08:43.556 [2024-07-26 10:18:56.441432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293132 ] 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:43.815 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:43.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:43.815 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:43.815 [2024-07-26 10:18:56.578130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.815 [2024-07-26 10:18:56.622416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:43.815 [2024-07-26 10:18:56.622463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3293132' to capture a snapshot of events at runtime. 00:08:43.815 [2024-07-26 10:18:56.622477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.815 [2024-07-26 10:18:56.622489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.815 [2024-07-26 10:18:56.622498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3293132 for offline analysis/debug. 
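Once the target is up, the rpc_integrity test that follows drives it purely through rpc_cmd wrappers: it verifies that bdev_get_bdevs starts out empty, creates a malloc bdev, and layers a passthru bdev on top of it. The same sequence can be replayed by hand with the stock RPC client (assuming the default /var/tmp/spdk.sock socket and jq installed), which is a convenient way to reproduce a failure outside the harness:

    cd /var/jenkins/workspace/crypto-phy-autotest/spdk
    scripts/rpc.py bdev_get_bdevs | jq length                    # 0 on a fresh target
    scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length                    # now 1
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length                    # now 2: Malloc0 plus Passthru0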
00:08:43.815 [2024-07-26 10:18:56.622527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.749 10:18:57 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.749 10:18:57 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:44.749 10:18:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:44.749 10:18:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:44.749 10:18:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:44.749 10:18:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:44.749 10:18:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.749 10:18:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.749 10:18:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.749 ************************************ 00:08:44.749 START TEST rpc_integrity 00:08:44.749 ************************************ 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.749 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.749 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:44.749 { 00:08:44.749 "name": "Malloc0", 00:08:44.749 "aliases": [ 00:08:44.749 "fef0cfd4-f146-4083-9b21-1c02c483308f" 00:08:44.749 ], 00:08:44.749 "product_name": "Malloc disk", 00:08:44.749 "block_size": 512, 00:08:44.749 "num_blocks": 16384, 00:08:44.749 "uuid": "fef0cfd4-f146-4083-9b21-1c02c483308f", 00:08:44.749 "assigned_rate_limits": { 00:08:44.749 "rw_ios_per_sec": 0, 00:08:44.749 "rw_mbytes_per_sec": 0, 00:08:44.749 "r_mbytes_per_sec": 0, 00:08:44.749 "w_mbytes_per_sec": 0 00:08:44.749 }, 00:08:44.749 
"claimed": false, 00:08:44.749 "zoned": false, 00:08:44.749 "supported_io_types": { 00:08:44.749 "read": true, 00:08:44.749 "write": true, 00:08:44.749 "unmap": true, 00:08:44.749 "flush": true, 00:08:44.749 "reset": true, 00:08:44.749 "nvme_admin": false, 00:08:44.749 "nvme_io": false, 00:08:44.749 "nvme_io_md": false, 00:08:44.749 "write_zeroes": true, 00:08:44.749 "zcopy": true, 00:08:44.749 "get_zone_info": false, 00:08:44.749 "zone_management": false, 00:08:44.749 "zone_append": false, 00:08:44.749 "compare": false, 00:08:44.749 "compare_and_write": false, 00:08:44.749 "abort": true, 00:08:44.749 "seek_hole": false, 00:08:44.749 "seek_data": false, 00:08:44.749 "copy": true, 00:08:44.749 "nvme_iov_md": false 00:08:44.749 }, 00:08:44.750 "memory_domains": [ 00:08:44.750 { 00:08:44.750 "dma_device_id": "system", 00:08:44.750 "dma_device_type": 1 00:08:44.750 }, 00:08:44.750 { 00:08:44.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.750 "dma_device_type": 2 00:08:44.750 } 00:08:44.750 ], 00:08:44.750 "driver_specific": {} 00:08:44.750 } 00:08:44.750 ]' 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 [2024-07-26 10:18:57.509064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:44.750 [2024-07-26 10:18:57.509103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.750 [2024-07-26 10:18:57.509121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f3a70 00:08:44.750 [2024-07-26 10:18:57.509132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.750 [2024-07-26 10:18:57.510486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.750 [2024-07-26 10:18:57.510513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:44.750 Passthru0 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:44.750 { 00:08:44.750 "name": "Malloc0", 00:08:44.750 "aliases": [ 00:08:44.750 "fef0cfd4-f146-4083-9b21-1c02c483308f" 00:08:44.750 ], 00:08:44.750 "product_name": "Malloc disk", 00:08:44.750 "block_size": 512, 00:08:44.750 "num_blocks": 16384, 00:08:44.750 "uuid": "fef0cfd4-f146-4083-9b21-1c02c483308f", 00:08:44.750 "assigned_rate_limits": { 00:08:44.750 "rw_ios_per_sec": 0, 00:08:44.750 "rw_mbytes_per_sec": 0, 00:08:44.750 "r_mbytes_per_sec": 0, 00:08:44.750 "w_mbytes_per_sec": 0 00:08:44.750 }, 00:08:44.750 "claimed": true, 00:08:44.750 "claim_type": "exclusive_write", 00:08:44.750 "zoned": false, 00:08:44.750 "supported_io_types": { 00:08:44.750 "read": true, 00:08:44.750 "write": true, 00:08:44.750 "unmap": true, 00:08:44.750 "flush": true, 
00:08:44.750 "reset": true, 00:08:44.750 "nvme_admin": false, 00:08:44.750 "nvme_io": false, 00:08:44.750 "nvme_io_md": false, 00:08:44.750 "write_zeroes": true, 00:08:44.750 "zcopy": true, 00:08:44.750 "get_zone_info": false, 00:08:44.750 "zone_management": false, 00:08:44.750 "zone_append": false, 00:08:44.750 "compare": false, 00:08:44.750 "compare_and_write": false, 00:08:44.750 "abort": true, 00:08:44.750 "seek_hole": false, 00:08:44.750 "seek_data": false, 00:08:44.750 "copy": true, 00:08:44.750 "nvme_iov_md": false 00:08:44.750 }, 00:08:44.750 "memory_domains": [ 00:08:44.750 { 00:08:44.750 "dma_device_id": "system", 00:08:44.750 "dma_device_type": 1 00:08:44.750 }, 00:08:44.750 { 00:08:44.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.750 "dma_device_type": 2 00:08:44.750 } 00:08:44.750 ], 00:08:44.750 "driver_specific": {} 00:08:44.750 }, 00:08:44.750 { 00:08:44.750 "name": "Passthru0", 00:08:44.750 "aliases": [ 00:08:44.750 "5062cc59-0f41-55d6-b1bb-0001a6389cbc" 00:08:44.750 ], 00:08:44.750 "product_name": "passthru", 00:08:44.750 "block_size": 512, 00:08:44.750 "num_blocks": 16384, 00:08:44.750 "uuid": "5062cc59-0f41-55d6-b1bb-0001a6389cbc", 00:08:44.750 "assigned_rate_limits": { 00:08:44.750 "rw_ios_per_sec": 0, 00:08:44.750 "rw_mbytes_per_sec": 0, 00:08:44.750 "r_mbytes_per_sec": 0, 00:08:44.750 "w_mbytes_per_sec": 0 00:08:44.750 }, 00:08:44.750 "claimed": false, 00:08:44.750 "zoned": false, 00:08:44.750 "supported_io_types": { 00:08:44.750 "read": true, 00:08:44.750 "write": true, 00:08:44.750 "unmap": true, 00:08:44.750 "flush": true, 00:08:44.750 "reset": true, 00:08:44.750 "nvme_admin": false, 00:08:44.750 "nvme_io": false, 00:08:44.750 "nvme_io_md": false, 00:08:44.750 "write_zeroes": true, 00:08:44.750 "zcopy": true, 00:08:44.750 "get_zone_info": false, 00:08:44.750 "zone_management": false, 00:08:44.750 "zone_append": false, 00:08:44.750 "compare": false, 00:08:44.750 "compare_and_write": false, 00:08:44.750 "abort": true, 00:08:44.750 "seek_hole": false, 00:08:44.750 "seek_data": false, 00:08:44.750 "copy": true, 00:08:44.750 "nvme_iov_md": false 00:08:44.750 }, 00:08:44.750 "memory_domains": [ 00:08:44.750 { 00:08:44.750 "dma_device_id": "system", 00:08:44.750 "dma_device_type": 1 00:08:44.750 }, 00:08:44.750 { 00:08:44.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.750 "dma_device_type": 2 00:08:44.750 } 00:08:44.750 ], 00:08:44.750 "driver_specific": { 00:08:44.750 "passthru": { 00:08:44.750 "name": "Passthru0", 00:08:44.750 "base_bdev_name": "Malloc0" 00:08:44.750 } 00:08:44.750 } 00:08:44.750 } 00:08:44.750 ]' 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd 
bdev_get_bdevs 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:44.750 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:44.750 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:45.009 10:18:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:45.009 00:08:45.009 real 0m0.292s 00:08:45.009 user 0m0.190s 00:08:45.009 sys 0m0.046s 00:08:45.009 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.009 10:18:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 ************************************ 00:08:45.009 END TEST rpc_integrity 00:08:45.009 ************************************ 00:08:45.009 10:18:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:45.009 10:18:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.009 10:18:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.009 10:18:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 ************************************ 00:08:45.009 START TEST rpc_plugins 00:08:45.009 ************************************ 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:45.009 { 00:08:45.009 "name": "Malloc1", 00:08:45.009 "aliases": [ 00:08:45.009 "b704724b-2cc3-4f56-acff-6aa25622a93b" 00:08:45.009 ], 00:08:45.009 "product_name": "Malloc disk", 00:08:45.009 "block_size": 4096, 00:08:45.009 "num_blocks": 256, 00:08:45.009 "uuid": "b704724b-2cc3-4f56-acff-6aa25622a93b", 00:08:45.009 "assigned_rate_limits": { 00:08:45.009 "rw_ios_per_sec": 0, 00:08:45.009 "rw_mbytes_per_sec": 0, 00:08:45.009 "r_mbytes_per_sec": 0, 00:08:45.009 "w_mbytes_per_sec": 0 00:08:45.009 }, 00:08:45.009 "claimed": false, 00:08:45.009 "zoned": false, 00:08:45.009 "supported_io_types": { 00:08:45.009 "read": true, 00:08:45.009 "write": true, 00:08:45.009 "unmap": true, 00:08:45.009 "flush": true, 00:08:45.009 "reset": true, 00:08:45.009 "nvme_admin": false, 00:08:45.009 "nvme_io": false, 00:08:45.009 "nvme_io_md": false, 00:08:45.009 "write_zeroes": true, 00:08:45.009 "zcopy": true, 00:08:45.009 "get_zone_info": false, 00:08:45.009 "zone_management": false, 00:08:45.009 "zone_append": false, 00:08:45.009 "compare": false, 00:08:45.009 "compare_and_write": false, 00:08:45.009 "abort": true, 00:08:45.009 "seek_hole": false, 00:08:45.009 "seek_data": false, 00:08:45.009 "copy": true, 00:08:45.009 "nvme_iov_md": false 
00:08:45.009 }, 00:08:45.009 "memory_domains": [ 00:08:45.009 { 00:08:45.009 "dma_device_id": "system", 00:08:45.009 "dma_device_type": 1 00:08:45.009 }, 00:08:45.009 { 00:08:45.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.009 "dma_device_type": 2 00:08:45.009 } 00:08:45.009 ], 00:08:45.009 "driver_specific": {} 00:08:45.009 } 00:08:45.009 ]' 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:45.009 10:18:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:45.009 00:08:45.009 real 0m0.139s 00:08:45.009 user 0m0.087s 00:08:45.009 sys 0m0.022s 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.009 10:18:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:45.009 ************************************ 00:08:45.009 END TEST rpc_plugins 00:08:45.009 ************************************ 00:08:45.268 10:18:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:45.268 10:18:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.268 10:18:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.268 10:18:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.268 ************************************ 00:08:45.268 START TEST rpc_trace_cmd_test 00:08:45.268 ************************************ 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:45.268 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3293132", 00:08:45.268 "tpoint_group_mask": "0x8", 00:08:45.268 "iscsi_conn": { 00:08:45.268 "mask": "0x2", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "scsi": { 00:08:45.268 "mask": "0x4", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "bdev": { 00:08:45.268 "mask": "0x8", 00:08:45.268 "tpoint_mask": "0xffffffffffffffff" 00:08:45.268 }, 00:08:45.268 "nvmf_rdma": { 00:08:45.268 "mask": "0x10", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "nvmf_tcp": { 00:08:45.268 "mask": "0x20", 00:08:45.268 
"tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "ftl": { 00:08:45.268 "mask": "0x40", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "blobfs": { 00:08:45.268 "mask": "0x80", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "dsa": { 00:08:45.268 "mask": "0x200", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "thread": { 00:08:45.268 "mask": "0x400", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "nvme_pcie": { 00:08:45.268 "mask": "0x800", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "iaa": { 00:08:45.268 "mask": "0x1000", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "nvme_tcp": { 00:08:45.268 "mask": "0x2000", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "bdev_nvme": { 00:08:45.268 "mask": "0x4000", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 }, 00:08:45.268 "sock": { 00:08:45.268 "mask": "0x8000", 00:08:45.268 "tpoint_mask": "0x0" 00:08:45.268 } 00:08:45.268 }' 00:08:45.268 10:18:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:45.268 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:45.528 10:18:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:45.528 00:08:45.528 real 0m0.238s 00:08:45.528 user 0m0.192s 00:08:45.528 sys 0m0.040s 00:08:45.528 10:18:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.528 10:18:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 ************************************ 00:08:45.528 END TEST rpc_trace_cmd_test 00:08:45.528 ************************************ 00:08:45.528 10:18:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:45.528 10:18:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:45.528 10:18:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:45.528 10:18:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.528 10:18:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.528 10:18:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 ************************************ 00:08:45.528 START TEST rpc_daemon_integrity 00:08:45.528 ************************************ 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:45.528 
10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:45.528 { 00:08:45.528 "name": "Malloc2", 00:08:45.528 "aliases": [ 00:08:45.528 "1ed1fa8b-17cc-4a56-a3c0-bdafb7496246" 00:08:45.528 ], 00:08:45.528 "product_name": "Malloc disk", 00:08:45.528 "block_size": 512, 00:08:45.528 "num_blocks": 16384, 00:08:45.528 "uuid": "1ed1fa8b-17cc-4a56-a3c0-bdafb7496246", 00:08:45.528 "assigned_rate_limits": { 00:08:45.528 "rw_ios_per_sec": 0, 00:08:45.528 "rw_mbytes_per_sec": 0, 00:08:45.528 "r_mbytes_per_sec": 0, 00:08:45.528 "w_mbytes_per_sec": 0 00:08:45.528 }, 00:08:45.528 "claimed": false, 00:08:45.528 "zoned": false, 00:08:45.528 "supported_io_types": { 00:08:45.528 "read": true, 00:08:45.528 "write": true, 00:08:45.528 "unmap": true, 00:08:45.528 "flush": true, 00:08:45.528 "reset": true, 00:08:45.528 "nvme_admin": false, 00:08:45.528 "nvme_io": false, 00:08:45.528 "nvme_io_md": false, 00:08:45.528 "write_zeroes": true, 00:08:45.528 "zcopy": true, 00:08:45.528 "get_zone_info": false, 00:08:45.528 "zone_management": false, 00:08:45.528 "zone_append": false, 00:08:45.528 "compare": false, 00:08:45.528 "compare_and_write": false, 00:08:45.528 "abort": true, 00:08:45.528 "seek_hole": false, 00:08:45.528 "seek_data": false, 00:08:45.528 "copy": true, 00:08:45.528 "nvme_iov_md": false 00:08:45.528 }, 00:08:45.528 "memory_domains": [ 00:08:45.528 { 00:08:45.528 "dma_device_id": "system", 00:08:45.528 "dma_device_type": 1 00:08:45.528 }, 00:08:45.528 { 00:08:45.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.528 "dma_device_type": 2 00:08:45.528 } 00:08:45.528 ], 00:08:45.528 "driver_specific": {} 00:08:45.528 } 00:08:45.528 ]' 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.528 [2024-07-26 10:18:58.419623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:45.528 [2024-07-26 10:18:58.419665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.528 [2024-07-26 10:18:58.419683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f5590 00:08:45.528 [2024-07-26 10:18:58.419694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:45.528 [2024-07-26 10:18:58.420942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.528 [2024-07-26 10:18:58.420968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:45.528 Passthru0 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.528 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:45.788 { 00:08:45.788 "name": "Malloc2", 00:08:45.788 "aliases": [ 00:08:45.788 "1ed1fa8b-17cc-4a56-a3c0-bdafb7496246" 00:08:45.788 ], 00:08:45.788 "product_name": "Malloc disk", 00:08:45.788 "block_size": 512, 00:08:45.788 "num_blocks": 16384, 00:08:45.788 "uuid": "1ed1fa8b-17cc-4a56-a3c0-bdafb7496246", 00:08:45.788 "assigned_rate_limits": { 00:08:45.788 "rw_ios_per_sec": 0, 00:08:45.788 "rw_mbytes_per_sec": 0, 00:08:45.788 "r_mbytes_per_sec": 0, 00:08:45.788 "w_mbytes_per_sec": 0 00:08:45.788 }, 00:08:45.788 "claimed": true, 00:08:45.788 "claim_type": "exclusive_write", 00:08:45.788 "zoned": false, 00:08:45.788 "supported_io_types": { 00:08:45.788 "read": true, 00:08:45.788 "write": true, 00:08:45.788 "unmap": true, 00:08:45.788 "flush": true, 00:08:45.788 "reset": true, 00:08:45.788 "nvme_admin": false, 00:08:45.788 "nvme_io": false, 00:08:45.788 "nvme_io_md": false, 00:08:45.788 "write_zeroes": true, 00:08:45.788 "zcopy": true, 00:08:45.788 "get_zone_info": false, 00:08:45.788 "zone_management": false, 00:08:45.788 "zone_append": false, 00:08:45.788 "compare": false, 00:08:45.788 "compare_and_write": false, 00:08:45.788 "abort": true, 00:08:45.788 "seek_hole": false, 00:08:45.788 "seek_data": false, 00:08:45.788 "copy": true, 00:08:45.788 "nvme_iov_md": false 00:08:45.788 }, 00:08:45.788 "memory_domains": [ 00:08:45.788 { 00:08:45.788 "dma_device_id": "system", 00:08:45.788 "dma_device_type": 1 00:08:45.788 }, 00:08:45.788 { 00:08:45.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.788 "dma_device_type": 2 00:08:45.788 } 00:08:45.788 ], 00:08:45.788 "driver_specific": {} 00:08:45.788 }, 00:08:45.788 { 00:08:45.788 "name": "Passthru0", 00:08:45.788 "aliases": [ 00:08:45.788 "c2a4c2b1-9537-50b5-9abd-18d7fb40aadf" 00:08:45.788 ], 00:08:45.788 "product_name": "passthru", 00:08:45.788 "block_size": 512, 00:08:45.788 "num_blocks": 16384, 00:08:45.788 "uuid": "c2a4c2b1-9537-50b5-9abd-18d7fb40aadf", 00:08:45.788 "assigned_rate_limits": { 00:08:45.788 "rw_ios_per_sec": 0, 00:08:45.788 "rw_mbytes_per_sec": 0, 00:08:45.788 "r_mbytes_per_sec": 0, 00:08:45.788 "w_mbytes_per_sec": 0 00:08:45.788 }, 00:08:45.788 "claimed": false, 00:08:45.788 "zoned": false, 00:08:45.788 "supported_io_types": { 00:08:45.788 "read": true, 00:08:45.788 "write": true, 00:08:45.788 "unmap": true, 00:08:45.788 "flush": true, 00:08:45.788 "reset": true, 00:08:45.788 "nvme_admin": false, 00:08:45.788 "nvme_io": false, 00:08:45.788 "nvme_io_md": false, 00:08:45.788 "write_zeroes": true, 00:08:45.788 "zcopy": true, 00:08:45.788 "get_zone_info": false, 00:08:45.788 "zone_management": false, 00:08:45.788 "zone_append": false, 00:08:45.788 "compare": false, 00:08:45.788 "compare_and_write": false, 
00:08:45.788 "abort": true, 00:08:45.788 "seek_hole": false, 00:08:45.788 "seek_data": false, 00:08:45.788 "copy": true, 00:08:45.788 "nvme_iov_md": false 00:08:45.788 }, 00:08:45.788 "memory_domains": [ 00:08:45.788 { 00:08:45.788 "dma_device_id": "system", 00:08:45.788 "dma_device_type": 1 00:08:45.788 }, 00:08:45.788 { 00:08:45.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.788 "dma_device_type": 2 00:08:45.788 } 00:08:45.788 ], 00:08:45.788 "driver_specific": { 00:08:45.788 "passthru": { 00:08:45.788 "name": "Passthru0", 00:08:45.788 "base_bdev_name": "Malloc2" 00:08:45.788 } 00:08:45.788 } 00:08:45.788 } 00:08:45.788 ]' 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:45.788 00:08:45.788 real 0m0.298s 00:08:45.788 user 0m0.188s 00:08:45.788 sys 0m0.052s 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.788 10:18:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:45.788 ************************************ 00:08:45.788 END TEST rpc_daemon_integrity 00:08:45.788 ************************************ 00:08:45.788 10:18:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:45.788 10:18:58 rpc -- rpc/rpc.sh@84 -- # killprocess 3293132 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@950 -- # '[' -z 3293132 ']' 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@954 -- # kill -0 3293132 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@955 -- # uname 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3293132 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3293132' 00:08:45.788 killing process with pid 3293132 00:08:45.788 10:18:58 
rpc -- common/autotest_common.sh@969 -- # kill 3293132 00:08:45.788 10:18:58 rpc -- common/autotest_common.sh@974 -- # wait 3293132 00:08:46.356 00:08:46.356 real 0m2.738s 00:08:46.356 user 0m3.462s 00:08:46.356 sys 0m0.913s 00:08:46.356 10:18:58 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.356 10:18:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.356 ************************************ 00:08:46.356 END TEST rpc 00:08:46.356 ************************************ 00:08:46.356 10:18:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:46.356 10:18:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.356 10:18:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.356 10:18:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.356 ************************************ 00:08:46.356 START TEST skip_rpc 00:08:46.356 ************************************ 00:08:46.356 10:18:59 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:46.356 * Looking for test storage... 00:08:46.357 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:08:46.357 10:18:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:08:46.357 10:18:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:08:46.357 10:18:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:46.357 10:18:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.357 10:18:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.357 10:18:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.357 ************************************ 00:08:46.357 START TEST skip_rpc 00:08:46.357 ************************************ 00:08:46.357 10:18:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:46.357 10:18:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3293767 00:08:46.357 10:18:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:46.357 10:18:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:46.357 10:18:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:46.616 [2024-07-26 10:18:59.282226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:08:46.616 [2024-07-26 10:18:59.282283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293767 ] 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:46.616 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:46.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.616 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:46.616 [2024-07-26 10:18:59.418896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.616 [2024-07-26 10:18:59.462400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3293767 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3293767 ']' 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3293767 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3293767 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3293767' 00:08:51.881 killing process with pid 3293767 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3293767 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3293767 00:08:51.881 00:08:51.881 real 0m5.392s 00:08:51.881 user 0m5.057s 00:08:51.881 sys 0m0.362s 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.881 10:19:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.881 ************************************ 00:08:51.881 END TEST skip_rpc 00:08:51.881 ************************************ 00:08:51.881 10:19:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:51.881 10:19:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.881 10:19:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.881 10:19:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.881 ************************************ 00:08:51.881 START TEST skip_rpc_with_json 00:08:51.881 ************************************ 00:08:51.881 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:51.881 10:19:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3294840 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3294840 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3294840 ']' 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:51.882 10:19:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:51.882 [2024-07-26 10:19:04.759558] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:08:51.882 [2024-07-26 10:19:04.759616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294840 ] 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:52.140 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.140 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:52.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.141 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:52.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.141 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:52.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.141 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:52.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.141 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:52.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:52.141 [2024-07-26 10:19:04.892691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.141 [2024-07-26 10:19:04.937208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.075 [2024-07-26 10:19:05.933938] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:53.075 request: 00:08:53.075 { 00:08:53.075 "trtype": "tcp", 00:08:53.075 "method": "nvmf_get_transports", 00:08:53.075 "req_id": 1 00:08:53.075 } 00:08:53.075 Got JSON-RPC error response 00:08:53.075 response: 00:08:53.075 { 00:08:53.075 "code": -19, 00:08:53.075 "message": "No such device" 00:08:53.075 } 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.075 [2024-07-26 10:19:05.942069] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.075 10:19:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.334 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.334 10:19:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:08:53.334 { 00:08:53.334 "subsystems": [ 00:08:53.334 { 00:08:53.334 "subsystem": "keyring", 00:08:53.334 "config": [] 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "subsystem": "iobuf", 00:08:53.334 "config": [ 00:08:53.334 { 00:08:53.334 "method": "iobuf_set_options", 00:08:53.334 "params": { 00:08:53.334 "small_pool_count": 8192, 00:08:53.334 "large_pool_count": 1024, 00:08:53.334 "small_bufsize": 8192, 00:08:53.334 "large_bufsize": 135168 00:08:53.334 } 00:08:53.334 } 00:08:53.334 ] 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "subsystem": "sock", 00:08:53.334 "config": [ 00:08:53.334 { 00:08:53.334 "method": "sock_set_default_impl", 00:08:53.334 "params": { 00:08:53.334 "impl_name": "posix" 00:08:53.334 } 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "method": "sock_impl_set_options", 00:08:53.334 "params": { 00:08:53.334 "impl_name": "ssl", 00:08:53.334 "recv_buf_size": 4096, 00:08:53.334 "send_buf_size": 4096, 00:08:53.334 "enable_recv_pipe": true, 00:08:53.334 "enable_quickack": false, 00:08:53.334 "enable_placement_id": 0, 00:08:53.334 "enable_zerocopy_send_server": true, 00:08:53.334 "enable_zerocopy_send_client": false, 00:08:53.334 "zerocopy_threshold": 0, 00:08:53.334 "tls_version": 0, 00:08:53.334 "enable_ktls": false 00:08:53.334 } 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "method": "sock_impl_set_options", 00:08:53.334 "params": { 00:08:53.334 "impl_name": "posix", 00:08:53.334 "recv_buf_size": 2097152, 00:08:53.334 "send_buf_size": 2097152, 00:08:53.334 "enable_recv_pipe": true, 00:08:53.334 "enable_quickack": false, 00:08:53.334 "enable_placement_id": 0, 00:08:53.334 "enable_zerocopy_send_server": true, 00:08:53.334 "enable_zerocopy_send_client": false, 00:08:53.334 "zerocopy_threshold": 0, 00:08:53.334 "tls_version": 0, 00:08:53.334 "enable_ktls": false 00:08:53.334 } 00:08:53.334 } 00:08:53.334 ] 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "subsystem": "vmd", 00:08:53.334 "config": [] 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "subsystem": "accel", 00:08:53.334 "config": [ 00:08:53.334 { 00:08:53.334 "method": "accel_set_options", 00:08:53.334 "params": { 00:08:53.334 "small_cache_size": 128, 00:08:53.334 "large_cache_size": 16, 00:08:53.334 "task_count": 2048, 00:08:53.334 "sequence_count": 2048, 00:08:53.334 "buf_count": 2048 00:08:53.334 } 00:08:53.334 } 00:08:53.334 ] 00:08:53.334 }, 00:08:53.334 { 00:08:53.334 "subsystem": "bdev", 00:08:53.334 "config": [ 00:08:53.334 { 00:08:53.334 "method": "bdev_set_options", 00:08:53.334 "params": { 00:08:53.334 "bdev_io_pool_size": 65535, 00:08:53.335 "bdev_io_cache_size": 256, 00:08:53.335 "bdev_auto_examine": true, 00:08:53.335 "iobuf_small_cache_size": 128, 00:08:53.335 "iobuf_large_cache_size": 16 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "bdev_raid_set_options", 00:08:53.335 "params": { 00:08:53.335 "process_window_size_kb": 1024, 00:08:53.335 "process_max_bandwidth_mb_sec": 0 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "bdev_iscsi_set_options", 00:08:53.335 "params": { 00:08:53.335 "timeout_sec": 30 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "bdev_nvme_set_options", 00:08:53.335 "params": { 00:08:53.335 "action_on_timeout": "none", 00:08:53.335 "timeout_us": 0, 00:08:53.335 "timeout_admin_us": 0, 00:08:53.335 "keep_alive_timeout_ms": 10000, 00:08:53.335 "arbitration_burst": 0, 00:08:53.335 "low_priority_weight": 0, 00:08:53.335 "medium_priority_weight": 0, 00:08:53.335 
"high_priority_weight": 0, 00:08:53.335 "nvme_adminq_poll_period_us": 10000, 00:08:53.335 "nvme_ioq_poll_period_us": 0, 00:08:53.335 "io_queue_requests": 0, 00:08:53.335 "delay_cmd_submit": true, 00:08:53.335 "transport_retry_count": 4, 00:08:53.335 "bdev_retry_count": 3, 00:08:53.335 "transport_ack_timeout": 0, 00:08:53.335 "ctrlr_loss_timeout_sec": 0, 00:08:53.335 "reconnect_delay_sec": 0, 00:08:53.335 "fast_io_fail_timeout_sec": 0, 00:08:53.335 "disable_auto_failback": false, 00:08:53.335 "generate_uuids": false, 00:08:53.335 "transport_tos": 0, 00:08:53.335 "nvme_error_stat": false, 00:08:53.335 "rdma_srq_size": 0, 00:08:53.335 "io_path_stat": false, 00:08:53.335 "allow_accel_sequence": false, 00:08:53.335 "rdma_max_cq_size": 0, 00:08:53.335 "rdma_cm_event_timeout_ms": 0, 00:08:53.335 "dhchap_digests": [ 00:08:53.335 "sha256", 00:08:53.335 "sha384", 00:08:53.335 "sha512" 00:08:53.335 ], 00:08:53.335 "dhchap_dhgroups": [ 00:08:53.335 "null", 00:08:53.335 "ffdhe2048", 00:08:53.335 "ffdhe3072", 00:08:53.335 "ffdhe4096", 00:08:53.335 "ffdhe6144", 00:08:53.335 "ffdhe8192" 00:08:53.335 ] 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "bdev_nvme_set_hotplug", 00:08:53.335 "params": { 00:08:53.335 "period_us": 100000, 00:08:53.335 "enable": false 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "bdev_wait_for_examine" 00:08:53.335 } 00:08:53.335 ] 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "scsi", 00:08:53.335 "config": null 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "scheduler", 00:08:53.335 "config": [ 00:08:53.335 { 00:08:53.335 "method": "framework_set_scheduler", 00:08:53.335 "params": { 00:08:53.335 "name": "static" 00:08:53.335 } 00:08:53.335 } 00:08:53.335 ] 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "vhost_scsi", 00:08:53.335 "config": [] 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "vhost_blk", 00:08:53.335 "config": [] 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "ublk", 00:08:53.335 "config": [] 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "nbd", 00:08:53.335 "config": [] 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "subsystem": "nvmf", 00:08:53.335 "config": [ 00:08:53.335 { 00:08:53.335 "method": "nvmf_set_config", 00:08:53.335 "params": { 00:08:53.335 "discovery_filter": "match_any", 00:08:53.335 "admin_cmd_passthru": { 00:08:53.335 "identify_ctrlr": false 00:08:53.335 } 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "nvmf_set_max_subsystems", 00:08:53.335 "params": { 00:08:53.335 "max_subsystems": 1024 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "nvmf_set_crdt", 00:08:53.335 "params": { 00:08:53.335 "crdt1": 0, 00:08:53.335 "crdt2": 0, 00:08:53.335 "crdt3": 0 00:08:53.335 } 00:08:53.335 }, 00:08:53.335 { 00:08:53.335 "method": "nvmf_create_transport", 00:08:53.335 "params": { 00:08:53.335 "trtype": "TCP", 00:08:53.335 "max_queue_depth": 128, 00:08:53.335 "max_io_qpairs_per_ctrlr": 127, 00:08:53.335 "in_capsule_data_size": 4096, 00:08:53.335 "max_io_size": 131072, 00:08:53.335 "io_unit_size": 131072, 00:08:53.335 "max_aq_depth": 128, 00:08:53.335 "num_shared_buffers": 511, 00:08:53.335 "buf_cache_size": 4294967295, 00:08:53.335 "dif_insert_or_strip": false, 00:08:53.335 "zcopy": false, 00:08:53.335 "c2h_success": true, 00:08:53.335 "sock_priority": 0, 00:08:53.335 "abort_timeout_sec": 1, 00:08:53.335 "ack_timeout": 0, 00:08:53.335 "data_wr_pool_size": 0 00:08:53.335 } 00:08:53.335 } 00:08:53.335 ] 00:08:53.335 }, 
00:08:53.335 { 00:08:53.335 "subsystem": "iscsi", 00:08:53.335 "config": [ 00:08:53.335 { 00:08:53.335 "method": "iscsi_set_options", 00:08:53.335 "params": { 00:08:53.335 "node_base": "iqn.2016-06.io.spdk", 00:08:53.335 "max_sessions": 128, 00:08:53.335 "max_connections_per_session": 2, 00:08:53.335 "max_queue_depth": 64, 00:08:53.335 "default_time2wait": 2, 00:08:53.335 "default_time2retain": 20, 00:08:53.335 "first_burst_length": 8192, 00:08:53.335 "immediate_data": true, 00:08:53.335 "allow_duplicated_isid": false, 00:08:53.335 "error_recovery_level": 0, 00:08:53.335 "nop_timeout": 60, 00:08:53.335 "nop_in_interval": 30, 00:08:53.335 "disable_chap": false, 00:08:53.335 "require_chap": false, 00:08:53.335 "mutual_chap": false, 00:08:53.335 "chap_group": 0, 00:08:53.335 "max_large_datain_per_connection": 64, 00:08:53.335 "max_r2t_per_connection": 4, 00:08:53.335 "pdu_pool_size": 36864, 00:08:53.335 "immediate_data_pool_size": 16384, 00:08:53.335 "data_out_pool_size": 2048 00:08:53.335 } 00:08:53.335 } 00:08:53.335 ] 00:08:53.335 } 00:08:53.335 ] 00:08:53.335 } 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3294840 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3294840 ']' 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3294840 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3294840 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3294840' 00:08:53.335 killing process with pid 3294840 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3294840 00:08:53.335 10:19:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3294840 00:08:53.594 10:19:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3295120 00:08:53.594 10:19:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:53.594 10:19:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3295120 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3295120 ']' 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3295120 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3295120 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3295120' 00:08:58.861 killing process with pid 3295120 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3295120 00:08:58.861 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3295120 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:08:59.119 00:08:59.119 real 0m7.158s 00:08:59.119 user 0m7.039s 00:08:59.119 sys 0m0.921s 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:59.119 ************************************ 00:08:59.119 END TEST skip_rpc_with_json 00:08:59.119 ************************************ 00:08:59.119 10:19:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:59.119 10:19:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.119 10:19:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.119 10:19:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.119 ************************************ 00:08:59.119 START TEST skip_rpc_with_delay 00:08:59.119 ************************************ 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.119 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.120 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.120 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:59.120 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:59.120 10:19:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:59.378 [2024-07-26 10:19:12.061064] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:59.378 [2024-07-26 10:19:12.061261] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:59.378 10:19:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:59.378 10:19:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.378 10:19:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.378 10:19:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.378 00:08:59.378 real 0m0.165s 00:08:59.378 user 0m0.105s 00:08:59.378 sys 0m0.057s 00:08:59.378 10:19:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.378 10:19:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:59.378 ************************************ 00:08:59.378 END TEST skip_rpc_with_delay 00:08:59.378 ************************************ 00:08:59.378 10:19:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:59.378 10:19:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:59.378 10:19:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:59.378 10:19:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:59.378 10:19:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.378 10:19:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.378 ************************************ 00:08:59.378 START TEST exit_on_failed_rpc_init 00:08:59.378 ************************************ 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3296178 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3296178 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3296178 ']' 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.378 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.379 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.379 10:19:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:59.379 10:19:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:59.637 [2024-07-26 10:19:12.319216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
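The skip_rpc_with_json and skip_rpc_with_delay runs above boil down to two spdk_tgt invocations. The following is a minimal sketch reconstructed from the commands visible in this log (the binary and file paths, the 5-second wait, the kill, and the 'TCP Transport Init' marker all appear in the trace above); the backgrounding and output redirection into log.txt are assumptions, since the harness's exact plumbing is not shown here.

# Sketch only -- not the verbatim rpc/skip_rpc.sh logic.
SPDK_TGT=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
RPC_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc

# skip_rpc_with_json: boot the target from the saved JSON config with no RPC
# server, give it time to initialize, then confirm the nvmf TCP transport
# declared in config.json was actually created.
$SPDK_TGT --no-rpc-server -m 0x1 --json $RPC_DIR/config.json > $RPC_DIR/log.txt 2>&1 &
spdk_pid=$!
sleep 5
kill $spdk_pid
grep -q 'TCP Transport Init' $RPC_DIR/log.txt
rm $RPC_DIR/log.txt

# skip_rpc_with_delay: --wait-for-rpc is rejected when no RPC server will be
# started ("Cannot use '--wait-for-rpc' if no RPC server is going to be
# started."), so a zero exit status here would be a test failure.
$SPDK_TGT --no-rpc-server -m 0x1 --wait-for-rpc && echo "unexpected success"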
00:08:59.637 [2024-07-26 10:19:12.319345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296178 ] 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:59.637 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:59.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.637 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:59.637 [2024-07-26 10:19:12.533672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.896 [2024-07-26 10:19:12.577550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:00.829 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:00.829 [2024-07-26 10:19:13.504099] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:00.829 [2024-07-26 10:19:13.504181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296306 ] 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:00.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:00.829 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:00.829 [2024-07-26 10:19:13.627105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.829 [2024-07-26 10:19:13.670066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.829 [2024-07-26 10:19:13.670174] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:00.829 [2024-07-26 10:19:13.670190] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:00.829 [2024-07-26 10:19:13.670201] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3296178 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3296178 ']' 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3296178 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3296178 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3296178' 00:09:01.088 killing process with pid 3296178 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3296178 00:09:01.088 10:19:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3296178 00:09:01.346 00:09:01.346 real 0m1.917s 00:09:01.346 user 0m2.286s 00:09:01.346 sys 0m0.690s 00:09:01.346 10:19:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.346 10:19:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:01.346 ************************************ 00:09:01.346 END TEST exit_on_failed_rpc_init 00:09:01.346 ************************************ 00:09:01.346 10:19:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:09:01.346 00:09:01.346 real 0m15.075s 00:09:01.346 user 0m14.650s 00:09:01.346 sys 0m2.346s 00:09:01.346 10:19:14 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.346 10:19:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.346 ************************************ 00:09:01.346 END TEST skip_rpc 00:09:01.346 ************************************ 00:09:01.346 10:19:14 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:01.346 10:19:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.346 10:19:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.346 10:19:14 -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.346 ************************************ 00:09:01.346 START TEST rpc_client 00:09:01.346 ************************************ 00:09:01.346 10:19:14 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:01.604 * Looking for test storage... 00:09:01.604 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:09:01.604 10:19:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:01.604 OK 00:09:01.604 10:19:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:01.604 00:09:01.604 real 0m0.142s 00:09:01.604 user 0m0.051s 00:09:01.604 sys 0m0.101s 00:09:01.604 10:19:14 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.604 10:19:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:01.604 ************************************ 00:09:01.604 END TEST rpc_client 00:09:01.604 ************************************ 00:09:01.604 10:19:14 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:09:01.604 10:19:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.604 10:19:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.604 10:19:14 -- common/autotest_common.sh@10 -- # set +x 00:09:01.604 ************************************ 00:09:01.604 START TEST json_config 00:09:01.604 ************************************ 00:09:01.604 10:19:14 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:09:01.863 10:19:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.863 10:19:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:09:01.864 10:19:14 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.864 10:19:14 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.864 10:19:14 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.864 10:19:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.864 10:19:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.864 10:19:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.864 10:19:14 json_config -- paths/export.sh@5 -- # export PATH 00:09:01.864 10:19:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@47 -- # : 0 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.864 10:19:14 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST 
+ SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:09:01.864 INFO: JSON configuration test init 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.864 10:19:14 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:09:01.864 10:19:14 json_config -- json_config/common.sh@9 -- # local app=target 00:09:01.864 10:19:14 json_config -- json_config/common.sh@10 -- # shift 00:09:01.864 10:19:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:01.864 10:19:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:01.864 10:19:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:01.864 10:19:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.864 10:19:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.864 10:19:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3296612 00:09:01.864 10:19:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:01.864 Waiting for target to run... 
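Once the target is listening, the json_config test drives all configuration over the private RPC socket rather than a config file. Below is a minimal sketch of that pattern, assembled from the rpc.py calls that appear further down in this trace; the socket path and method names are taken verbatim from the log, while piping gen_nvme.sh output into load_config is an assumption about how the harness wires those two steps together.

# Sketch only -- not the verbatim test/json_config/json_config.sh logic.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock
# Target launched above with: spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc

# Route crypto operations to the dpdk_cryptodev accel module before the
# framework starts (these calls appear below in the trace).
$SPDK/scripts/rpc.py -s $SOCK dpdk_cryptodev_scan_accel_module
$SPDK/scripts/rpc.py -s $SOCK accel_assign_opc -o encrypt -m dpdk_cryptodev
$SPDK/scripts/rpc.py -s $SOCK accel_assign_opc -o decrypt -m dpdk_cryptodev

# Load an NVMe-backed subsystem configuration generated on the fly, then
# inspect bdev registration events via the notification RPCs.
$SPDK/scripts/gen_nvme.sh --json-with-subsystems | \
    $SPDK/scripts/rpc.py -s $SOCK load_config
$SPDK/scripts/rpc.py -s $SOCK notify_get_types
$SPDK/scripts/rpc.py -s $SOCK notify_get_notifications -i 0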
00:09:01.864 10:19:14 json_config -- json_config/common.sh@25 -- # waitforlisten 3296612 /var/tmp/spdk_tgt.sock 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 3296612 ']' 00:09:01.864 10:19:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:01.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.864 10:19:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:01.864 [2024-07-26 10:19:14.660175] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:01.864 [2024-07-26 10:19:14.660237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296612 ] 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 
0000:3d:02.6 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:02.483 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:02.483 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:02.483 [2024-07-26 10:19:15.169814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.483 [2024-07-26 10:19:15.203348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.741 10:19:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.741 10:19:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:02.741 10:19:15 json_config -- json_config/common.sh@26 -- # echo '' 00:09:02.741 00:09:02.741 10:19:15 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:09:02.741 10:19:15 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:09:02.741 10:19:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:02.741 10:19:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:02.741 10:19:15 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:09:02.741 10:19:15 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:09:02.741 10:19:15 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:09:02.998 10:19:15 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:09:02.998 10:19:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:09:03.256 [2024-07-26 10:19:16.001751] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:09:03.256 10:19:16 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:09:03.256 10:19:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:09:03.514 [2024-07-26 10:19:16.230334] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:09:03.514 10:19:16 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:09:03.514 10:19:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.514 10:19:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:03.514 10:19:16 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:03.514 10:19:16 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:09:03.514 10:19:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:03.772 [2024-07-26 10:19:16.523334] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:09:09.039 10:19:21 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:09:09.039 10:19:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:09.039 10:19:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.039 10:19:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:09.040 10:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@51 -- # sort 00:09:09.040 
10:19:21 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:09:09.040 10:19:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.040 10:19:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@59 -- # return 0 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:09:09.040 10:19:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.040 10:19:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:09:09.040 10:19:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:09.040 10:19:21 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:09.298 10:19:22 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:09:09.298 10:19:22 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:09.298 10:19:22 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:09.298 10:19:22 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:09:09.298 10:19:22 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:09:09.298 10:19:22 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:09.298 10:19:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:09.556 Nvme0n1p0 Nvme0n1p1 00:09:09.556 10:19:22 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:09.556 10:19:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:09.815 [2024-07-26 10:19:22.602540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:09.815 [2024-07-26 10:19:22.602592] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:09.815 00:09:09.815 10:19:22 
json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:09.815 10:19:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:10.073 Malloc3 00:09:10.073 10:19:22 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:10.073 10:19:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:10.332 [2024-07-26 10:19:23.055952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:10.332 [2024-07-26 10:19:23.055999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.332 [2024-07-26 10:19:23.056022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x124bfa0 00:09:10.332 [2024-07-26 10:19:23.056034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.332 [2024-07-26 10:19:23.057392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.332 [2024-07-26 10:19:23.057420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:10.332 PTBdevFromMalloc3 00:09:10.332 10:19:23 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:10.332 10:19:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:10.590 Null0 00:09:10.590 10:19:23 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:10.590 10:19:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:10.848 Malloc0 00:09:10.848 10:19:23 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:10.848 10:19:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:10.848 Malloc1 00:09:10.848 10:19:23 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:10.848 10:19:23 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:11.414 102400+0 records in 00:09:11.414 102400+0 records out 00:09:11.414 104857600 bytes (105 MB, 100 MiB) copied, 0.278814 s, 376 MB/s 00:09:11.414 10:19:24 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:11.414 10:19:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:11.414 aio_disk 00:09:11.414 10:19:24 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:11.414 10:19:24 
json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:11.414 10:19:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:15.598 5e430732-236a-46c4-afb2-29f8ec8aef53 00:09:15.598 10:19:28 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:15.598 10:19:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:15.598 10:19:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:15.856 10:19:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:15.856 10:19:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:16.115 10:19:28 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:16.115 10:19:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:16.373 10:19:29 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:16.373 10:19:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:16.631 10:19:29 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:09:16.632 10:19:29 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:09:16.632 10:19:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:09:16.632 MallocForCryptoBdev 00:09:16.890 10:19:29 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:09:16.890 10:19:29 json_config -- json_config/json_config.sh@163 -- # wc -l 00:09:16.890 10:19:29 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:09:16.890 10:19:29 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:09:16.890 10:19:29 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:09:16.890 10:19:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:09:16.890 [2024-07-26 10:19:29.779576] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:09:16.890 CryptoMallocBdev 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@173 -- # 
expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b64365d8-26c1-4b8d-8dc4-d7dee14abc63 bdev_register:642337ec-db36-447e-8631-2fbfa7eebb24 bdev_register:b5d30d76-92b1-489b-a126-e14fd1300f15 bdev_register:82c2589c-ccf9-428f-8e46-d39965459a9e bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b64365d8-26c1-4b8d-8dc4-d7dee14abc63 bdev_register:642337ec-db36-447e-8631-2fbfa7eebb24 bdev_register:b5d30d76-92b1-489b-a126-e14fd1300f15 bdev_register:82c2589c-ccf9-428f-8e46-d39965459a9e bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@75 -- # sort 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@76 -- # sort 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:09:17.149 10:19:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:17.149 10:19:29 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:17.149 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:09:17.149 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.149 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.149 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:09:17.150 10:19:30 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b64365d8-26c1-4b8d-8dc4-d7dee14abc63 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:642337ec-db36-447e-8631-2fbfa7eebb24 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b5d30d76-92b1-489b-a126-e14fd1300f15 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- 
json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:82c2589c-ccf9-428f-8e46-d39965459a9e 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.150 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:642337ec-db36-447e-8631-2fbfa7eebb24 bdev_register:82c2589c-ccf9-428f-8e46-d39965459a9e bdev_register:aio_disk bdev_register:b5d30d76-92b1-489b-a126-e14fd1300f15 bdev_register:b64365d8-26c1-4b8d-8dc4-d7dee14abc63 bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\4\2\3\3\7\e\c\-\d\b\3\6\-\4\4\7\e\-\8\6\3\1\-\2\f\b\f\a\7\e\e\b\b\2\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\2\c\2\5\8\9\c\-\c\c\f\9\-\4\2\8\f\-\8\e\4\6\-\d\3\9\9\6\5\4\5\9\a\9\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\5\d\3\0\d\7\6\-\9\2\b\1\-\4\8\9\b\-\a\1\2\6\-\e\1\4\f\d\1\3\0\0\f\1\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\6\4\3\6\5\d\8\-\2\6\c\1\-\4\b\8\d\-\8\d\c\4\-\d\7\d\e\e\1\4\a\b\c\6\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@90 -- # cat 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:642337ec-db36-447e-8631-2fbfa7eebb24 bdev_register:82c2589c-ccf9-428f-8e46-d39965459a9e bdev_register:aio_disk bdev_register:b5d30d76-92b1-489b-a126-e14fd1300f15 bdev_register:b64365d8-26c1-4b8d-8dc4-d7dee14abc63 bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:09:17.410 Expected events matched: 00:09:17.410 bdev_register:642337ec-db36-447e-8631-2fbfa7eebb24 00:09:17.410 
bdev_register:82c2589c-ccf9-428f-8e46-d39965459a9e 00:09:17.410 bdev_register:aio_disk 00:09:17.410 bdev_register:b5d30d76-92b1-489b-a126-e14fd1300f15 00:09:17.410 bdev_register:b64365d8-26c1-4b8d-8dc4-d7dee14abc63 00:09:17.410 bdev_register:CryptoMallocBdev 00:09:17.410 bdev_register:Malloc0 00:09:17.410 bdev_register:Malloc0p0 00:09:17.410 bdev_register:Malloc0p1 00:09:17.410 bdev_register:Malloc0p2 00:09:17.410 bdev_register:Malloc1 00:09:17.410 bdev_register:Malloc3 00:09:17.410 bdev_register:MallocForCryptoBdev 00:09:17.410 bdev_register:Null0 00:09:17.410 bdev_register:Nvme0n1 00:09:17.410 bdev_register:Nvme0n1p0 00:09:17.410 bdev_register:Nvme0n1p1 00:09:17.410 bdev_register:PTBdevFromMalloc3 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:09:17.410 10:19:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.410 10:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:09:17.410 10:19:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.410 10:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:09:17.410 10:19:30 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:17.410 10:19:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:17.669 MallocBdevForConfigChangeCheck 00:09:17.669 10:19:30 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:09:17.669 10:19:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.669 10:19:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.669 10:19:30 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:09:17.669 10:19:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:17.928 10:19:30 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:09:17.928 INFO: shutting down applications... 
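For reference, the bdev subsystem configuration verified above is built entirely from rpc.py calls recorded earlier in this log. Collected into a single replayable sketch against the same target socket (illustrative only, not output from this run):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  $rpc -s $sock bdev_malloc_create 8 4096 --name Malloc3            # base bdev for the passthru test
  $rpc -s $sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
  $rpc -s $sock bdev_null_create Null0 32 512
  $rpc -s $sock bdev_malloc_create 32 512 --name Malloc0
  $rpc -s $sock bdev_malloc_create 16 4096 --name Malloc1
  dd if=/dev/zero of=/sample_aio bs=1024 count=102400               # backing file for the AIO bdev
  $rpc -s $sock bdev_aio_create /sample_aio aio_disk 1024
  $rpc -s $sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
  $rpc -s $sock bdev_lvol_create -l lvs_test lvol0 32
  $rpc -s $sock bdev_lvol_create -l lvs_test -t lvol1 32
  $rpc -s $sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
  $rpc -s $sock bdev_lvol_clone lvs_test/snapshot0 clone0
  $rpc -s $sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev
  $rpc -s $sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456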
00:09:17.928 10:19:30 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:09:17.928 10:19:30 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:09:17.928 10:19:30 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:09:17.928 10:19:30 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:18.187 [2024-07-26 10:19:30.963184] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:20.722 Calling clear_iscsi_subsystem 00:09:20.722 Calling clear_nvmf_subsystem 00:09:20.722 Calling clear_nbd_subsystem 00:09:20.722 Calling clear_ublk_subsystem 00:09:20.722 Calling clear_vhost_blk_subsystem 00:09:20.722 Calling clear_vhost_scsi_subsystem 00:09:20.722 Calling clear_bdev_subsystem 00:09:20.722 10:19:33 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:09:20.722 10:19:33 json_config -- json_config/json_config.sh@347 -- # count=100 00:09:20.722 10:19:33 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:09:20.722 10:19:33 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:09:20.722 10:19:33 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:20.722 10:19:33 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:21.050 10:19:33 json_config -- json_config/json_config.sh@349 -- # break 00:09:21.050 10:19:33 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:09:21.050 10:19:33 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:09:21.050 10:19:33 json_config -- json_config/common.sh@31 -- # local app=target 00:09:21.050 10:19:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:21.051 10:19:33 json_config -- json_config/common.sh@35 -- # [[ -n 3296612 ]] 00:09:21.051 10:19:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3296612 00:09:21.051 10:19:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:21.051 10:19:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:21.051 10:19:33 json_config -- json_config/common.sh@41 -- # kill -0 3296612 00:09:21.051 10:19:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:21.618 10:19:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:21.618 10:19:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:21.618 10:19:34 json_config -- json_config/common.sh@41 -- # kill -0 3296612 00:09:21.618 10:19:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:21.618 10:19:34 json_config -- json_config/common.sh@43 -- # break 00:09:21.618 10:19:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:21.618 10:19:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:21.618 SPDK target shutdown done 00:09:21.618 10:19:34 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:09:21.618 INFO: relaunching applications... 
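The shutdown just logged follows the kill-and-poll pattern in json_config/common.sh: send SIGINT to the target pid, then probe it with kill -0 for up to 30 half-second intervals before declaring shutdown done. A simplified sketch of that loop (the $pid variable stands in for the real script's app_pid["$app"] bookkeeping visible above):

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      if ! kill -0 "$pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'   # process gone, stop polling
          break
      fi
      sleep 0.5
  done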
00:09:21.618 10:19:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:21.618 10:19:34 json_config -- json_config/common.sh@9 -- # local app=target 00:09:21.618 10:19:34 json_config -- json_config/common.sh@10 -- # shift 00:09:21.618 10:19:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:21.618 10:19:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:21.618 10:19:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:21.618 10:19:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.618 10:19:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.618 10:19:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3300199 00:09:21.618 10:19:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:21.618 Waiting for target to run... 00:09:21.618 10:19:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:21.618 10:19:34 json_config -- json_config/common.sh@25 -- # waitforlisten 3300199 /var/tmp/spdk_tgt.sock 00:09:21.618 10:19:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 3300199 ']' 00:09:21.618 10:19:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:21.618 10:19:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.618 10:19:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:21.618 10:19:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.618 10:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.618 [2024-07-26 10:19:34.491980] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:09:21.618 [2024-07-26 10:19:34.492043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300199 ] 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.6 cannot be used 
00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:22.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.187 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:22.187 [2024-07-26 10:19:35.007241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.187 [2024-07-26 10:19:35.037714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.447 [2024-07-26 10:19:35.091749] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:09:22.447 [2024-07-26 10:19:35.099783] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:09:22.447 [2024-07-26 10:19:35.107801] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:09:22.447 [2024-07-26 10:19:35.188612] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:09:24.985 [2024-07-26 10:19:37.465283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:24.985 [2024-07-26 10:19:37.465336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:24.985 [2024-07-26 10:19:37.465349] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:24.985 [2024-07-26 10:19:37.473300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:24.985 [2024-07-26 10:19:37.473331] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:24.985 [2024-07-26 10:19:37.481313] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:24.985 [2024-07-26 10:19:37.481334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:24.985 [2024-07-26 10:19:37.489348] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:09:24.985 [2024-07-26 10:19:37.489373] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:09:24.985 [2024-07-26 10:19:37.489384] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:27.520 [2024-07-26 10:19:40.382595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:27.520 [2024-07-26 10:19:40.382641] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.520 [2024-07-26 10:19:40.382658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3588450 00:09:27.520 [2024-07-26 10:19:40.382669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.520 [2024-07-26 10:19:40.382940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.520 [2024-07-26 10:19:40.382956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:27.779 10:19:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.779 10:19:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:27.779 10:19:40 json_config -- json_config/common.sh@26 -- # echo '' 00:09:27.779 00:09:27.779 10:19:40 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:09:27.779 10:19:40 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:27.779 INFO: Checking if target configuration is the same... 00:09:27.779 10:19:40 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:27.779 10:19:40 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:09:27.779 10:19:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:27.779 + '[' 2 -ne 2 ']' 00:09:27.779 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:27.779 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:09:27.779 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:09:27.779 +++ basename /dev/fd/62 00:09:27.779 ++ mktemp /tmp/62.XXX 00:09:27.779 + tmp_file_1=/tmp/62.e3n 00:09:27.779 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:27.779 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:27.779 + tmp_file_2=/tmp/spdk_tgt_config.json.6lK 00:09:27.779 + ret=0 00:09:27.779 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:28.345 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:28.345 + diff -u /tmp/62.e3n /tmp/spdk_tgt_config.json.6lK 00:09:28.345 + echo 'INFO: JSON config files are the same' 00:09:28.345 INFO: JSON config files are the same 00:09:28.345 + rm /tmp/62.e3n /tmp/spdk_tgt_config.json.6lK 00:09:28.345 + exit 0 00:09:28.345 10:19:41 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:09:28.345 10:19:41 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:28.345 INFO: changing configuration and checking if this can be detected... 
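The "is the configuration the same" check above dumps the live configuration with save_config, normalizes both JSON documents with config_filter.py -method sort, and diffs the results. A minimal sketch of that comparison (json_diff.sh itself wires the inputs through /dev/fd descriptors and mktemp files as shown above; the stdin/stdout plumbing below is an assumption for readability):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted
  $filter -method sort < /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'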
00:09:28.345 10:19:41 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:28.345 10:19:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:28.345 10:19:41 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:28.345 10:19:41 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:09:28.345 10:19:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:28.345 + '[' 2 -ne 2 ']' 00:09:28.345 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:28.345 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:09:28.345 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:09:28.345 +++ basename /dev/fd/62 00:09:28.603 ++ mktemp /tmp/62.XXX 00:09:28.603 + tmp_file_1=/tmp/62.dEJ 00:09:28.603 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:28.603 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:28.603 + tmp_file_2=/tmp/spdk_tgt_config.json.v0Z 00:09:28.603 + ret=0 00:09:28.603 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:28.862 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:28.862 + diff -u /tmp/62.dEJ /tmp/spdk_tgt_config.json.v0Z 00:09:28.862 + ret=1 00:09:28.862 + echo '=== Start of file: /tmp/62.dEJ ===' 00:09:28.862 + cat /tmp/62.dEJ 00:09:28.862 + echo '=== End of file: /tmp/62.dEJ ===' 00:09:28.862 + echo '' 00:09:28.862 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v0Z ===' 00:09:28.862 + cat /tmp/spdk_tgt_config.json.v0Z 00:09:28.862 + echo '=== End of file: /tmp/spdk_tgt_config.json.v0Z ===' 00:09:28.862 + echo '' 00:09:28.862 + rm /tmp/62.dEJ /tmp/spdk_tgt_config.json.v0Z 00:09:28.862 + exit 1 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:09:28.862 INFO: configuration change detected. 
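The change-detection pass above is the same comparison run again after deliberately mutating the target: MallocBdevForConfigChangeCheck is deleted over RPC, so the re-sorted live config no longer matches the saved file and diff exits non-zero. As a sketch, reusing the hypothetical /tmp files from the previous snippet:

  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted
  if ! diff -u /tmp/saved.sorted /tmp/live.sorted; then
      echo 'INFO: configuration change detected.'   # expected: the deleted malloc bdev is missing
  fi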
00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:09:28.862 10:19:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.862 10:19:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@321 -- # [[ -n 3300199 ]] 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:09:28.862 10:19:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.862 10:19:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:09:28.862 10:19:41 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:28.862 10:19:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:29.121 10:19:41 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:29.121 10:19:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:29.380 10:19:42 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:29.380 10:19:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:29.638 10:19:42 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:29.638 10:19:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:29.896 10:19:42 json_config -- json_config/json_config.sh@197 -- # uname -s 00:09:29.896 10:19:42 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:09:29.896 10:19:42 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:09:29.896 10:19:42 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:09:29.896 10:19:42 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 10:19:42 json_config -- json_config/json_config.sh@327 -- # killprocess 3300199 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@950 -- # '[' -z 3300199 ']' 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@954 -- # kill -0 3300199 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@955 -- # uname 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3300199 00:09:29.896 10:19:42 json_config -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3300199' 00:09:29.896 killing process with pid 3300199 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@969 -- # kill 3300199 00:09:29.896 10:19:42 json_config -- common/autotest_common.sh@974 -- # wait 3300199 00:09:32.428 10:19:45 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:09:32.428 10:19:45 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:09:32.428 10:19:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.429 10:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:32.429 10:19:45 json_config -- json_config/json_config.sh@332 -- # return 0 00:09:32.429 10:19:45 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:09:32.429 INFO: Success 00:09:32.429 00:09:32.429 real 0m30.760s 00:09:32.429 user 0m34.931s 00:09:32.429 sys 0m4.126s 00:09:32.429 10:19:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.429 10:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:32.429 ************************************ 00:09:32.429 END TEST json_config 00:09:32.429 ************************************ 00:09:32.429 10:19:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:32.429 10:19:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:32.429 10:19:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.429 10:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:32.429 ************************************ 00:09:32.429 START TEST json_config_extra_key 00:09:32.429 ************************************ 00:09:32.429 10:19:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:32.688 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.688 10:19:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:09:32.688 10:19:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.688 10:19:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.688 10:19:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.688 10:19:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.688 10:19:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.688 10:19:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.688 10:19:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:32.689 10:19:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.689 10:19:45 
json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.689 10:19:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:32.689 INFO: launching applications... 00:09:32.689 10:19:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3302169 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:32.689 Waiting for target to run... 
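The extra_key variant launches a fresh target directly from a JSON configuration file and then waits for its RPC socket, exactly as the spdk_tgt command line above shows. A standalone sketch of that start-up (the readiness poll is only an approximation of the harness's waitforlisten helper, which is not reproduced here):

  spdk_tgt=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  $spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json &
  pid=$!
  until $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5        # poll until the UNIX-domain RPC socket answers
  done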
00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3302169 /var/tmp/spdk_tgt.sock 00:09:32.689 10:19:45 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3302169 ']' 00:09:32.689 10:19:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:32.689 10:19:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:09:32.689 10:19:45 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.689 10:19:45 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:32.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:32.689 10:19:45 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.689 10:19:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 [2024-07-26 10:19:45.483461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:32.689 [2024-07-26 10:19:45.483523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302169 ] 00:09:32.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested 
device 0000:3d:02.5 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:32.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.949 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:32.949 [2024-07-26 10:19:45.848604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.208 [2024-07-26 10:19:45.875556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.776 10:19:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.776 10:19:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:33.776 00:09:33.776 10:19:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:33.776 INFO: shutting down applications... 
00:09:33.776 10:19:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3302169 ]] 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3302169 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3302169 00:09:33.776 10:19:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3302169 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:34.035 10:19:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:34.035 SPDK target shutdown done 00:09:34.035 10:19:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:34.035 Success 00:09:34.035 00:09:34.035 real 0m1.577s 00:09:34.035 user 0m1.190s 00:09:34.035 sys 0m0.493s 00:09:34.035 10:19:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.035 10:19:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:34.035 ************************************ 00:09:34.035 END TEST json_config_extra_key 00:09:34.035 ************************************ 00:09:34.035 10:19:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:34.035 10:19:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.035 10:19:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.035 10:19:46 -- common/autotest_common.sh@10 -- # set +x 00:09:34.294 ************************************ 00:09:34.294 START TEST alias_rpc 00:09:34.294 ************************************ 00:09:34.294 10:19:46 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:34.294 * Looking for test storage... 
00:09:34.294 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:09:34.294 10:19:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:34.294 10:19:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3302482 00:09:34.294 10:19:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:34.294 10:19:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3302482 00:09:34.294 10:19:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3302482 ']' 00:09:34.294 10:19:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.294 10:19:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.294 10:19:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.294 10:19:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.294 10:19:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.294 [2024-07-26 10:19:47.124348] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:34.294 [2024-07-26 10:19:47.124402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302482 ]
00:09:34.294-00:09:34.295 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used (this message pair repeats once for every requested QAT VF, 0000:3d:01.0 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7)
00:09:34.553 [2024-07-26 10:19:47.247585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.553 [2024-07-26 10:19:47.292500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.490 10:19:48 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.490 10:19:48 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:35.490 10:19:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:35.749 10:19:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3302482 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3302482 ']' 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3302482 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux =
Linux ']' 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3302482 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3302482' 00:09:35.749 killing process with pid 3302482 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@969 -- # kill 3302482 00:09:35.749 10:19:48 alias_rpc -- common/autotest_common.sh@974 -- # wait 3302482 00:09:36.317 00:09:36.317 real 0m1.961s 00:09:36.317 user 0m2.367s 00:09:36.317 sys 0m0.552s 00:09:36.317 10:19:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.317 10:19:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 ************************************ 00:09:36.317 END TEST alias_rpc 00:09:36.317 ************************************ 00:09:36.317 10:19:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:36.317 10:19:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:36.317 10:19:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.317 10:19:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.317 10:19:48 -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 ************************************ 00:09:36.317 START TEST spdkcli_tcp 00:09:36.317 ************************************ 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:36.317 * Looking for test storage... 00:09:36.317 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3303020 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3303020 00:09:36.317 10:19:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3303020 ']' 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:36.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.317 10:19:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.317 [2024-07-26 10:19:49.198766] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:36.317 [2024-07-26 10:19:49.198830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303020 ]
00:09:36.576-00:09:36.577 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used (this message pair repeats once for every requested QAT VF, 0000:3d:01.0 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7)
[2024-07-26 10:19:49.331155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.577 [2024-07-26 10:19:49.377357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.577 [2024-07-26 10:19:49.377362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.526 10:19:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.526 10:19:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:09:37.526 10:19:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3303071 00:09:37.526 10:19:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:37.527 10:19:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:37.527 [ 00:09:37.527 "bdev_malloc_delete", 00:09:37.527 "bdev_malloc_create", 00:09:37.527 "bdev_null_resize", 00:09:37.527 "bdev_null_delete", 00:09:37.527 "bdev_null_create", 00:09:37.527 "bdev_nvme_cuse_unregister", 00:09:37.527 "bdev_nvme_cuse_register", 00:09:37.527 "bdev_opal_new_user", 00:09:37.527 "bdev_opal_set_lock_state", 00:09:37.527 "bdev_opal_delete", 00:09:37.527 "bdev_opal_get_info", 00:09:37.527 "bdev_opal_create", 00:09:37.527 "bdev_nvme_opal_revert", 00:09:37.527 "bdev_nvme_opal_init", 00:09:37.527 "bdev_nvme_send_cmd", 00:09:37.527 "bdev_nvme_get_path_iostat", 00:09:37.527 "bdev_nvme_get_mdns_discovery_info", 00:09:37.527 "bdev_nvme_stop_mdns_discovery", 00:09:37.527 "bdev_nvme_start_mdns_discovery", 00:09:37.527 "bdev_nvme_set_multipath_policy", 00:09:37.527 "bdev_nvme_set_preferred_path", 00:09:37.527 "bdev_nvme_get_io_paths", 00:09:37.527 "bdev_nvme_remove_error_injection", 00:09:37.527 "bdev_nvme_add_error_injection", 00:09:37.527 "bdev_nvme_get_discovery_info",
"bdev_nvme_stop_discovery", 00:09:37.527 "bdev_nvme_start_discovery", 00:09:37.527 "bdev_nvme_get_controller_health_info", 00:09:37.527 "bdev_nvme_disable_controller", 00:09:37.527 "bdev_nvme_enable_controller", 00:09:37.527 "bdev_nvme_reset_controller", 00:09:37.527 "bdev_nvme_get_transport_statistics", 00:09:37.527 "bdev_nvme_apply_firmware", 00:09:37.527 "bdev_nvme_detach_controller", 00:09:37.527 "bdev_nvme_get_controllers", 00:09:37.527 "bdev_nvme_attach_controller", 00:09:37.527 "bdev_nvme_set_hotplug", 00:09:37.527 "bdev_nvme_set_options", 00:09:37.527 "bdev_passthru_delete", 00:09:37.527 "bdev_passthru_create", 00:09:37.527 "bdev_lvol_set_parent_bdev", 00:09:37.527 "bdev_lvol_set_parent", 00:09:37.527 "bdev_lvol_check_shallow_copy", 00:09:37.527 "bdev_lvol_start_shallow_copy", 00:09:37.527 "bdev_lvol_grow_lvstore", 00:09:37.527 "bdev_lvol_get_lvols", 00:09:37.527 "bdev_lvol_get_lvstores", 00:09:37.527 "bdev_lvol_delete", 00:09:37.527 "bdev_lvol_set_read_only", 00:09:37.527 "bdev_lvol_resize", 00:09:37.527 "bdev_lvol_decouple_parent", 00:09:37.527 "bdev_lvol_inflate", 00:09:37.527 "bdev_lvol_rename", 00:09:37.527 "bdev_lvol_clone_bdev", 00:09:37.527 "bdev_lvol_clone", 00:09:37.527 "bdev_lvol_snapshot", 00:09:37.527 "bdev_lvol_create", 00:09:37.527 "bdev_lvol_delete_lvstore", 00:09:37.527 "bdev_lvol_rename_lvstore", 00:09:37.527 "bdev_lvol_create_lvstore", 00:09:37.527 "bdev_raid_set_options", 00:09:37.527 "bdev_raid_remove_base_bdev", 00:09:37.527 "bdev_raid_add_base_bdev", 00:09:37.527 "bdev_raid_delete", 00:09:37.527 "bdev_raid_create", 00:09:37.527 "bdev_raid_get_bdevs", 00:09:37.527 "bdev_error_inject_error", 00:09:37.527 "bdev_error_delete", 00:09:37.527 "bdev_error_create", 00:09:37.527 "bdev_split_delete", 00:09:37.527 "bdev_split_create", 00:09:37.527 "bdev_delay_delete", 00:09:37.527 "bdev_delay_create", 00:09:37.527 "bdev_delay_update_latency", 00:09:37.527 "bdev_zone_block_delete", 00:09:37.527 "bdev_zone_block_create", 00:09:37.527 "blobfs_create", 00:09:37.527 "blobfs_detect", 00:09:37.527 "blobfs_set_cache_size", 00:09:37.527 "bdev_crypto_delete", 00:09:37.527 "bdev_crypto_create", 00:09:37.527 "bdev_compress_delete", 00:09:37.527 "bdev_compress_create", 00:09:37.527 "bdev_compress_get_orphans", 00:09:37.527 "bdev_aio_delete", 00:09:37.527 "bdev_aio_rescan", 00:09:37.527 "bdev_aio_create", 00:09:37.527 "bdev_ftl_set_property", 00:09:37.527 "bdev_ftl_get_properties", 00:09:37.527 "bdev_ftl_get_stats", 00:09:37.527 "bdev_ftl_unmap", 00:09:37.527 "bdev_ftl_unload", 00:09:37.527 "bdev_ftl_delete", 00:09:37.527 "bdev_ftl_load", 00:09:37.527 "bdev_ftl_create", 00:09:37.527 "bdev_virtio_attach_controller", 00:09:37.527 "bdev_virtio_scsi_get_devices", 00:09:37.527 "bdev_virtio_detach_controller", 00:09:37.527 "bdev_virtio_blk_set_hotplug", 00:09:37.527 "bdev_iscsi_delete", 00:09:37.527 "bdev_iscsi_create", 00:09:37.527 "bdev_iscsi_set_options", 00:09:37.527 "accel_error_inject_error", 00:09:37.527 "ioat_scan_accel_module", 00:09:37.527 "dsa_scan_accel_module", 00:09:37.527 "iaa_scan_accel_module", 00:09:37.527 "dpdk_cryptodev_get_driver", 00:09:37.527 "dpdk_cryptodev_set_driver", 00:09:37.527 "dpdk_cryptodev_scan_accel_module", 00:09:37.527 "compressdev_scan_accel_module", 00:09:37.527 "keyring_file_remove_key", 00:09:37.527 "keyring_file_add_key", 00:09:37.527 "keyring_linux_set_options", 00:09:37.527 "iscsi_get_histogram", 00:09:37.527 "iscsi_enable_histogram", 00:09:37.527 "iscsi_set_options", 00:09:37.527 "iscsi_get_auth_groups", 00:09:37.527 
"iscsi_auth_group_remove_secret", 00:09:37.527 "iscsi_auth_group_add_secret", 00:09:37.527 "iscsi_delete_auth_group", 00:09:37.527 "iscsi_create_auth_group", 00:09:37.527 "iscsi_set_discovery_auth", 00:09:37.527 "iscsi_get_options", 00:09:37.527 "iscsi_target_node_request_logout", 00:09:37.527 "iscsi_target_node_set_redirect", 00:09:37.527 "iscsi_target_node_set_auth", 00:09:37.527 "iscsi_target_node_add_lun", 00:09:37.527 "iscsi_get_stats", 00:09:37.527 "iscsi_get_connections", 00:09:37.527 "iscsi_portal_group_set_auth", 00:09:37.527 "iscsi_start_portal_group", 00:09:37.527 "iscsi_delete_portal_group", 00:09:37.527 "iscsi_create_portal_group", 00:09:37.527 "iscsi_get_portal_groups", 00:09:37.527 "iscsi_delete_target_node", 00:09:37.527 "iscsi_target_node_remove_pg_ig_maps", 00:09:37.527 "iscsi_target_node_add_pg_ig_maps", 00:09:37.527 "iscsi_create_target_node", 00:09:37.527 "iscsi_get_target_nodes", 00:09:37.527 "iscsi_delete_initiator_group", 00:09:37.527 "iscsi_initiator_group_remove_initiators", 00:09:37.527 "iscsi_initiator_group_add_initiators", 00:09:37.527 "iscsi_create_initiator_group", 00:09:37.527 "iscsi_get_initiator_groups", 00:09:37.527 "nvmf_set_crdt", 00:09:37.527 "nvmf_set_config", 00:09:37.527 "nvmf_set_max_subsystems", 00:09:37.527 "nvmf_stop_mdns_prr", 00:09:37.527 "nvmf_publish_mdns_prr", 00:09:37.527 "nvmf_subsystem_get_listeners", 00:09:37.527 "nvmf_subsystem_get_qpairs", 00:09:37.527 "nvmf_subsystem_get_controllers", 00:09:37.527 "nvmf_get_stats", 00:09:37.527 "nvmf_get_transports", 00:09:37.527 "nvmf_create_transport", 00:09:37.527 "nvmf_get_targets", 00:09:37.527 "nvmf_delete_target", 00:09:37.527 "nvmf_create_target", 00:09:37.527 "nvmf_subsystem_allow_any_host", 00:09:37.527 "nvmf_subsystem_remove_host", 00:09:37.527 "nvmf_subsystem_add_host", 00:09:37.527 "nvmf_ns_remove_host", 00:09:37.527 "nvmf_ns_add_host", 00:09:37.527 "nvmf_subsystem_remove_ns", 00:09:37.527 "nvmf_subsystem_add_ns", 00:09:37.527 "nvmf_subsystem_listener_set_ana_state", 00:09:37.527 "nvmf_discovery_get_referrals", 00:09:37.527 "nvmf_discovery_remove_referral", 00:09:37.527 "nvmf_discovery_add_referral", 00:09:37.527 "nvmf_subsystem_remove_listener", 00:09:37.527 "nvmf_subsystem_add_listener", 00:09:37.527 "nvmf_delete_subsystem", 00:09:37.527 "nvmf_create_subsystem", 00:09:37.527 "nvmf_get_subsystems", 00:09:37.527 "env_dpdk_get_mem_stats", 00:09:37.527 "nbd_get_disks", 00:09:37.527 "nbd_stop_disk", 00:09:37.527 "nbd_start_disk", 00:09:37.527 "ublk_recover_disk", 00:09:37.527 "ublk_get_disks", 00:09:37.527 "ublk_stop_disk", 00:09:37.527 "ublk_start_disk", 00:09:37.527 "ublk_destroy_target", 00:09:37.527 "ublk_create_target", 00:09:37.527 "virtio_blk_create_transport", 00:09:37.527 "virtio_blk_get_transports", 00:09:37.527 "vhost_controller_set_coalescing", 00:09:37.527 "vhost_get_controllers", 00:09:37.527 "vhost_delete_controller", 00:09:37.527 "vhost_create_blk_controller", 00:09:37.527 "vhost_scsi_controller_remove_target", 00:09:37.527 "vhost_scsi_controller_add_target", 00:09:37.527 "vhost_start_scsi_controller", 00:09:37.527 "vhost_create_scsi_controller", 00:09:37.527 "thread_set_cpumask", 00:09:37.527 "framework_get_governor", 00:09:37.527 "framework_get_scheduler", 00:09:37.527 "framework_set_scheduler", 00:09:37.527 "framework_get_reactors", 00:09:37.527 "thread_get_io_channels", 00:09:37.527 "thread_get_pollers", 00:09:37.527 "thread_get_stats", 00:09:37.527 "framework_monitor_context_switch", 00:09:37.527 "spdk_kill_instance", 00:09:37.527 "log_enable_timestamps", 00:09:37.527 
"log_get_flags", 00:09:37.527 "log_clear_flag", 00:09:37.527 "log_set_flag", 00:09:37.527 "log_get_level", 00:09:37.527 "log_set_level", 00:09:37.527 "log_get_print_level", 00:09:37.527 "log_set_print_level", 00:09:37.527 "framework_enable_cpumask_locks", 00:09:37.527 "framework_disable_cpumask_locks", 00:09:37.527 "framework_wait_init", 00:09:37.527 "framework_start_init", 00:09:37.527 "scsi_get_devices", 00:09:37.527 "bdev_get_histogram", 00:09:37.527 "bdev_enable_histogram", 00:09:37.527 "bdev_set_qos_limit", 00:09:37.527 "bdev_set_qd_sampling_period", 00:09:37.527 "bdev_get_bdevs", 00:09:37.527 "bdev_reset_iostat", 00:09:37.527 "bdev_get_iostat", 00:09:37.527 "bdev_examine", 00:09:37.527 "bdev_wait_for_examine", 00:09:37.527 "bdev_set_options", 00:09:37.527 "notify_get_notifications", 00:09:37.527 "notify_get_types", 00:09:37.527 "accel_get_stats", 00:09:37.527 "accel_set_options", 00:09:37.527 "accel_set_driver", 00:09:37.527 "accel_crypto_key_destroy", 00:09:37.527 "accel_crypto_keys_get", 00:09:37.528 "accel_crypto_key_create", 00:09:37.528 "accel_assign_opc", 00:09:37.528 "accel_get_module_info", 00:09:37.528 "accel_get_opc_assignments", 00:09:37.528 "vmd_rescan", 00:09:37.528 "vmd_remove_device", 00:09:37.528 "vmd_enable", 00:09:37.528 "sock_get_default_impl", 00:09:37.528 "sock_set_default_impl", 00:09:37.528 "sock_impl_set_options", 00:09:37.528 "sock_impl_get_options", 00:09:37.528 "iobuf_get_stats", 00:09:37.528 "iobuf_set_options", 00:09:37.528 "framework_get_pci_devices", 00:09:37.528 "framework_get_config", 00:09:37.528 "framework_get_subsystems", 00:09:37.528 "trace_get_info", 00:09:37.528 "trace_get_tpoint_group_mask", 00:09:37.528 "trace_disable_tpoint_group", 00:09:37.528 "trace_enable_tpoint_group", 00:09:37.528 "trace_clear_tpoint_mask", 00:09:37.528 "trace_set_tpoint_mask", 00:09:37.528 "keyring_get_keys", 00:09:37.528 "spdk_get_version", 00:09:37.528 "rpc_get_methods" 00:09:37.528 ] 00:09:37.528 10:19:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.528 10:19:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:37.528 10:19:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3303020 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3303020 ']' 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3303020 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.528 10:19:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3303020 00:09:37.786 10:19:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.786 10:19:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.786 10:19:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3303020' 00:09:37.786 killing process with pid 3303020 00:09:37.786 10:19:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3303020 00:09:37.786 10:19:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3303020 00:09:38.046 00:09:38.046 real 0m1.763s 00:09:38.046 user 0m3.294s 00:09:38.046 sys 0m0.581s 00:09:38.046 10:19:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.046 10:19:50 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.046 ************************************ 00:09:38.046 END TEST spdkcli_tcp 00:09:38.046 ************************************ 00:09:38.046 10:19:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:38.046 10:19:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.046 10:19:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.046 10:19:50 -- common/autotest_common.sh@10 -- # set +x 00:09:38.046 ************************************ 00:09:38.046 START TEST dpdk_mem_utility 00:09:38.046 ************************************ 00:09:38.046 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:38.305 * Looking for test storage... 00:09:38.305 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:09:38.305 10:19:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:38.305 10:19:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3303390 00:09:38.305 10:19:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:09:38.305 10:19:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3303390 00:09:38.305 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3303390 ']' 00:09:38.305 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.305 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.305 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.305 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.305 10:19:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:38.305 [2024-07-26 10:19:51.033583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
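For reference, the TCP bridge exercised by the spdkcli_tcp test above can be reproduced by hand with the same commands the test drives (a minimal sketch using the port and socket path shown in the log, run from an SPDK build tree; adjust paths to your checkout):
  ./build/bin/spdk_tgt -m 0x3 -p 0 &                                  # start the target on cores 0-1, as in the test
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &             # bridge TCP port 9998 to the target's UNIX-domain RPC socket
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # list the target's RPC methods over TCP (100 retries, 2 s timeout)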
00:09:38.305 [2024-07-26 10:19:51.033642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303390 ]
00:09:38.305-00:09:38.306 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used (this message pair repeats once for every requested QAT VF, 0000:3d:01.0 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7)
[2024-07-26 10:19:51.160971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.306 [2024-07-26 10:19:51.204627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.245 10:19:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.245 10:19:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:39.245 10:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:39.245 10:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:39.245 10:19:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.245 10:19:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:39.245 { 00:09:39.245 "filename": "/tmp/spdk_mem_dump.txt" 00:09:39.245 } 00:09:39.245 10:19:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.245 10:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:39.245 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:39.245 1 heaps totaling size 814.000000 MiB 00:09:39.245 size: 814.000000 MiB heap id: 0 00:09:39.245 end heaps---------- 00:09:39.245 8 mempools totaling size 598.116089 MiB 00:09:39.245 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:39.245 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:39.245 size: 84.521057 MiB name: bdev_io_3303390 00:09:39.245 size: 51.011292 MiB name: evtpool_3303390 00:09:39.245 size: 50.003479 MiB name: msgpool_3303390 00:09:39.245 size: 21.763794 MiB name: PDU_Pool 00:09:39.245 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:39.245 size: 0.026123 MiB name: Session_Pool 00:09:39.245 end mempools------- 00:09:39.245 201 memzones totaling size 4.173645 MiB 00:09:39.245 size: 1.000366 MiB name: RG_ring_0_3303390 00:09:39.245 size: 1.000366 MiB name: RG_ring_1_3303390 00:09:39.245 size: 1.000366 MiB name: RG_ring_4_3303390 00:09:39.245 size: 1.000366 MiB name: RG_ring_5_3303390 00:09:39.245 size: 0.125366 MiB name: RG_ring_2_3303390 00:09:39.245 size: 0.015991 MiB name: RG_ring_3_3303390 00:09:39.245 size: 0.001282 MiB name:
QAT_SYM_CAPA_GEN_1 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.0_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.1_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.2_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.3_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.4_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.5_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.6_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:01.7_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.0_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.1_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.2_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.3_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.4_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.5_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.6_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1a:02.7_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.0_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.1_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.2_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.3_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.4_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.5_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.6_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:01.7_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.0_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.1_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.2_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.3_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.4_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.5_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.6_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1c:02.7_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.0_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.1_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.2_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.3_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.4_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.5_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.6_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:01.7_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.0_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.1_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.2_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.3_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.4_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.5_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.6_qat 00:09:39.245 size: 0.000244 MiB name: 0000:1e:02.7_qat 00:09:39.245 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_0 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_0 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_1 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_2 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_1 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_3 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_4 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_2 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_5 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_6 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_3 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_7 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_8 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_4 
00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_9 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_10 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_5 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_11 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_12 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_6 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_13 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_14 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_7 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_15 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_16 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_8 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_17 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_18 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_9 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_19 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_20 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_10 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_21 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_22 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_11 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_23 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_24 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_12 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_25 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_26 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_13 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_27 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_28 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_14 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_29 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_30 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_15 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_31 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_32 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_16 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_33 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_34 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_17 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_35 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_36 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_18 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_37 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_38 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_19 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_39 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_40 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_20 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_41 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_42 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_21 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_43 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_44 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_22 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_45 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_46 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_23 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_47 00:09:39.245 size: 0.000122 MiB name: 
rte_cryptodev_data_48 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_24 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_49 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_50 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_25 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_51 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_52 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_26 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_53 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_54 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_27 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_55 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_56 00:09:39.245 size: 0.000122 MiB name: rte_compressdev_data_28 00:09:39.245 size: 0.000122 MiB name: rte_cryptodev_data_57 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_58 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_29 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_59 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_60 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_30 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_61 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_62 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_31 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_63 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_64 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_32 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_65 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_66 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_33 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_67 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_68 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_34 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_69 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_70 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_35 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_71 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_72 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_36 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_73 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_74 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_37 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_75 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_76 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_38 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_77 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_78 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_39 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_79 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_80 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_40 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_81 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_82 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_41 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_83 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_84 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_42 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_85 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_86 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_43 00:09:39.246 
size: 0.000122 MiB name: rte_cryptodev_data_87 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_88 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_44 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_89 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_90 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_45 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_91 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_92 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_46 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_93 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_94 00:09:39.246 size: 0.000122 MiB name: rte_compressdev_data_47 00:09:39.246 size: 0.000122 MiB name: rte_cryptodev_data_95 00:09:39.246 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:09:39.246 end memzones------- 00:09:39.246 10:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:39.246 heap id: 0 total size: 814.000000 MiB number of busy elements: 636 number of free elements: 14 00:09:39.246 list of free elements. size: 11.787354 MiB 00:09:39.246 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:39.246 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:39.246 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:39.246 element at address: 0x200003e00000 with size: 0.996460 MiB 00:09:39.246 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:39.246 element at address: 0x200013800000 with size: 0.978882 MiB 00:09:39.246 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:39.246 element at address: 0x200019200000 with size: 0.937256 MiB 00:09:39.246 element at address: 0x20001aa00000 with size: 0.564941 MiB 00:09:39.246 element at address: 0x200003a00000 with size: 0.498535 MiB 00:09:39.246 element at address: 0x20000b200000 with size: 0.489807 MiB 00:09:39.246 element at address: 0x200000800000 with size: 0.486511 MiB 00:09:39.246 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:39.246 element at address: 0x200027e00000 with size: 0.395752 MiB 00:09:39.246 list of standard malloc elements. 
size: 199.895447 MiB 00:09:39.246 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:39.246 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:39.246 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:39.246 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:39.246 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:39.246 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:39.246 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:39.246 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:39.246 element at address: 0x20000032d3c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000330e00 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000334840 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000338280 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000033bcc0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000033f700 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000343140 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000346b80 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000034a5c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000034e000 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000351a40 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000355480 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000358ec0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000035c900 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000360340 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000363d80 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003677c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000036b200 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000036ec40 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000372680 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003760c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000379b00 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000037d540 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000380f80 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003849c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000388400 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000038be40 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000038f880 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003932c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x200000396d00 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000039a740 with size: 0.004333 MiB 00:09:39.246 element at address: 0x20000039e180 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003a1bc0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003a5600 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003a9040 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003aca80 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003b04c0 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003b3f00 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003b7940 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003bb380 with size: 0.004333 MiB 00:09:39.246 element at address: 0x2000003bedc0 with size: 0.004333 MiB 
00:09:39.246 element at address: 0x2000003c2800 with size: 0.004333 MiB
00:09:39.246 [... heap element dump continues: elements from 0x2000003c2800 through 0x200027e6ff00 with sizes of 0.004333, 0.004028, 0.000305, 0.000244 and 0.000183 MiB ...]
00:09:39.250 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:09:39.250 list of memzone associated elements.
size: 602.317200 MiB 00:09:39.250 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:39.250 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:39.250 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:39.250 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:39.250 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:39.250 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3303390_0 00:09:39.250 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:39.250 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3303390_0 00:09:39.250 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:39.250 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3303390_0 00:09:39.251 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:39.251 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:39.251 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:39.251 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:39.251 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:39.251 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3303390 00:09:39.251 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:39.251 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3303390 00:09:39.251 element at address: 0x200000228940 with size: 1.008118 MiB 00:09:39.251 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3303390 00:09:39.251 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:39.251 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:39.251 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:39.251 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:39.251 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:39.251 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:39.251 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:39.251 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:39.251 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:39.251 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3303390 00:09:39.251 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:39.251 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3303390 00:09:39.251 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:39.251 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3303390 00:09:39.251 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:39.251 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3303390 00:09:39.251 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:09:39.251 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3303390 00:09:39.251 element at address: 0x20000b27dc40 with size: 0.500488 MiB 00:09:39.251 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:39.251 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:39.251 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:39.251 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:39.251 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:39.251 element at address: 0x200000206bc0 with size: 0.125488 MiB 00:09:39.251 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3303390 00:09:39.251 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:39.251 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:39.251 element at address: 0x200027e65680 with size: 0.023743 MiB 00:09:39.251 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:39.251 element at address: 0x200000202900 with size: 0.016113 MiB 00:09:39.251 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3303390 00:09:39.251 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:09:39.251 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:39.251 element at address: 0x2000003d6280 with size: 0.001404 MiB 00:09:39.251 associated memzone info: size: 0.001282 MiB name: QAT_SYM_CAPA_GEN_1 00:09:39.251 element at address: 0x2000003d6ac0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.0_qat 00:09:39.251 element at address: 0x2000003d28c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.1_qat 00:09:39.251 element at address: 0x2000003cee80 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.2_qat 00:09:39.251 element at address: 0x2000003cb440 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.3_qat 00:09:39.251 element at address: 0x2000003c7a00 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.4_qat 00:09:39.251 element at address: 0x2000003c3fc0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.5_qat 00:09:39.251 element at address: 0x2000003c0580 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.6_qat 00:09:39.251 element at address: 0x2000003bcb40 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:01.7_qat 00:09:39.251 element at address: 0x2000003b9100 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.0_qat 00:09:39.251 element at address: 0x2000003b56c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.1_qat 00:09:39.251 element at address: 0x2000003b1c80 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.2_qat 00:09:39.251 element at address: 0x2000003ae240 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.3_qat 00:09:39.251 element at address: 0x2000003aa800 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.4_qat 00:09:39.251 element at address: 0x2000003a6dc0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.5_qat 00:09:39.251 element at address: 0x2000003a3380 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.6_qat 00:09:39.251 element at address: 0x20000039f940 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1a:02.7_qat 00:09:39.251 element at address: 0x20000039bf00 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.0_qat 00:09:39.251 element at address: 0x2000003984c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 
0000:1c:01.1_qat 00:09:39.251 element at address: 0x200000394a80 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.2_qat 00:09:39.251 element at address: 0x200000391040 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.3_qat 00:09:39.251 element at address: 0x20000038d600 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.4_qat 00:09:39.251 element at address: 0x200000389bc0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.5_qat 00:09:39.251 element at address: 0x200000386180 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.6_qat 00:09:39.251 element at address: 0x200000382740 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:01.7_qat 00:09:39.251 element at address: 0x20000037ed00 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.0_qat 00:09:39.251 element at address: 0x20000037b2c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.1_qat 00:09:39.251 element at address: 0x200000377880 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.2_qat 00:09:39.251 element at address: 0x200000373e40 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.3_qat 00:09:39.251 element at address: 0x200000370400 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.4_qat 00:09:39.251 element at address: 0x20000036c9c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.5_qat 00:09:39.251 element at address: 0x200000368f80 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.6_qat 00:09:39.251 element at address: 0x200000365540 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1c:02.7_qat 00:09:39.251 element at address: 0x200000361b00 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.0_qat 00:09:39.251 element at address: 0x20000035e0c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.1_qat 00:09:39.251 element at address: 0x20000035a680 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.2_qat 00:09:39.251 element at address: 0x200000356c40 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.3_qat 00:09:39.251 element at address: 0x200000353200 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.4_qat 00:09:39.251 element at address: 0x20000034f7c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.5_qat 00:09:39.251 element at address: 0x20000034bd80 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.6_qat 00:09:39.251 element at address: 0x200000348340 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:01.7_qat 00:09:39.251 element at address: 0x200000344900 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.0_qat 00:09:39.251 element at address: 
0x200000340ec0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.1_qat 00:09:39.251 element at address: 0x20000033d480 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.2_qat 00:09:39.251 element at address: 0x200000339a40 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.3_qat 00:09:39.251 element at address: 0x200000336000 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.4_qat 00:09:39.251 element at address: 0x2000003325c0 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.5_qat 00:09:39.251 element at address: 0x20000032eb80 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.6_qat 00:09:39.251 element at address: 0x20000032b140 with size: 0.000366 MiB 00:09:39.251 associated memzone info: size: 0.000244 MiB name: 0000:1e:02.7_qat 00:09:39.251 element at address: 0x2000003d5d00 with size: 0.000305 MiB 00:09:39.251 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:09:39.252 element at address: 0x200000227c00 with size: 0.000305 MiB 00:09:39.252 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3303390 00:09:39.252 element at address: 0x200000202700 with size: 0.000305 MiB 00:09:39.252 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3303390 00:09:39.252 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:09:39.252 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:39.252 element at address: 0x2000003d69c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:09:39.252 element at address: 0x2000003d6180 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:09:39.252 element at address: 0x2000003d5f00 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:09:39.252 element at address: 0x2000003d27c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:09:39.252 element at address: 0x2000003d2540 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:09:39.252 element at address: 0x2000003d2380 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:09:39.252 element at address: 0x2000003ced80 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:09:39.252 element at address: 0x2000003ceb00 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:09:39.252 element at address: 0x2000003ce940 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:09:39.252 element at address: 0x2000003cb340 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:09:39.252 element at address: 0x2000003cb0c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:09:39.252 element at address: 0x2000003caf00 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:09:39.252 element at 
address: 0x2000003c7900 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:09:39.252 element at address: 0x2000003c7680 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:09:39.252 element at address: 0x2000003c74c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:09:39.252 element at address: 0x2000003c3ec0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:09:39.252 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:09:39.252 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:09:39.252 element at address: 0x2000003c0480 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:09:39.252 element at address: 0x2000003c0200 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:09:39.252 element at address: 0x2000003c0040 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:09:39.252 element at address: 0x2000003bca40 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:09:39.252 element at address: 0x2000003bc7c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:09:39.252 element at address: 0x2000003bc600 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:09:39.252 element at address: 0x2000003b9000 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:09:39.252 element at address: 0x2000003b8d80 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:09:39.252 element at address: 0x2000003b8bc0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:09:39.252 element at address: 0x2000003b55c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:09:39.252 element at address: 0x2000003b5340 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:09:39.252 element at address: 0x2000003b5180 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:09:39.252 element at address: 0x2000003b1b80 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:09:39.252 element at address: 0x2000003b1900 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:09:39.252 element at address: 0x2000003b1740 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:09:39.252 element at address: 0x2000003ae140 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:09:39.252 element at address: 0x2000003adec0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 
0.000122 MiB name: rte_compressdev_data_11 00:09:39.252 element at address: 0x2000003add00 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:09:39.252 element at address: 0x2000003aa700 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:09:39.252 element at address: 0x2000003aa480 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:09:39.252 element at address: 0x2000003aa2c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:09:39.252 element at address: 0x2000003a6cc0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:09:39.252 element at address: 0x2000003a6a40 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:09:39.252 element at address: 0x2000003a6880 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:09:39.252 element at address: 0x2000003a3280 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:09:39.252 element at address: 0x2000003a3000 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:09:39.252 element at address: 0x2000003a2e40 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:09:39.252 element at address: 0x20000039f840 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:09:39.252 element at address: 0x20000039f5c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:09:39.252 element at address: 0x20000039f400 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:09:39.252 element at address: 0x20000039be00 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:09:39.252 element at address: 0x20000039bb80 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:09:39.252 element at address: 0x20000039b9c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:09:39.252 element at address: 0x2000003983c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:09:39.252 element at address: 0x200000398140 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:09:39.252 element at address: 0x200000397f80 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:09:39.252 element at address: 0x200000394980 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:09:39.252 element at address: 0x200000394700 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:09:39.252 element at address: 0x200000394540 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:09:39.252 element at address: 
0x200000390f40 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:09:39.252 element at address: 0x200000390cc0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:09:39.252 element at address: 0x200000390b00 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:09:39.252 element at address: 0x20000038d500 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:09:39.252 element at address: 0x20000038d280 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:09:39.252 element at address: 0x20000038d0c0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:09:39.252 element at address: 0x200000389ac0 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:09:39.252 element at address: 0x200000389840 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:09:39.252 element at address: 0x200000389680 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:09:39.252 element at address: 0x200000386080 with size: 0.000244 MiB 00:09:39.252 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:09:39.252 element at address: 0x200000385e00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:09:39.253 element at address: 0x200000385c40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:09:39.253 element at address: 0x200000382640 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:09:39.253 element at address: 0x2000003823c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:09:39.253 element at address: 0x200000382200 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:09:39.253 element at address: 0x20000037ec00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:09:39.253 element at address: 0x20000037e980 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:09:39.253 element at address: 0x20000037e7c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:09:39.253 element at address: 0x20000037b1c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:09:39.253 element at address: 0x20000037af40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:09:39.253 element at address: 0x20000037ad80 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:09:39.253 element at address: 0x200000377780 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:09:39.253 element at address: 0x200000377500 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 
0.000122 MiB name: rte_compressdev_data_26 00:09:39.253 element at address: 0x200000377340 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:09:39.253 element at address: 0x200000373d40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:09:39.253 element at address: 0x200000373ac0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:09:39.253 element at address: 0x200000373900 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:09:39.253 element at address: 0x200000370300 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:09:39.253 element at address: 0x200000370080 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:09:39.253 element at address: 0x20000036fec0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:09:39.253 element at address: 0x20000036c8c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:09:39.253 element at address: 0x20000036c640 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:09:39.253 element at address: 0x20000036c480 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:09:39.253 element at address: 0x200000368e80 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:09:39.253 element at address: 0x200000368c00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:09:39.253 element at address: 0x200000368a40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:09:39.253 element at address: 0x200000365440 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:09:39.253 element at address: 0x2000003651c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:09:39.253 element at address: 0x200000365000 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:09:39.253 element at address: 0x200000361a00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:09:39.253 element at address: 0x200000361780 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:09:39.253 element at address: 0x2000003615c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:09:39.253 element at address: 0x20000035dfc0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:09:39.253 element at address: 0x20000035dd40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:09:39.253 element at address: 0x20000035db80 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:09:39.253 element at address: 
0x20000035a580 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:09:39.253 element at address: 0x20000035a300 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:09:39.253 element at address: 0x20000035a140 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:09:39.253 element at address: 0x200000356b40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:09:39.253 element at address: 0x2000003568c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:09:39.253 element at address: 0x200000356700 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:09:39.253 element at address: 0x200000353100 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:09:39.253 element at address: 0x200000352e80 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:09:39.253 element at address: 0x200000352cc0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:09:39.253 element at address: 0x20000034f6c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 00:09:39.253 element at address: 0x20000034f440 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:09:39.253 element at address: 0x20000034f280 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:09:39.253 element at address: 0x20000034bc80 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:09:39.253 element at address: 0x20000034ba00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:09:39.253 element at address: 0x20000034b840 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:09:39.253 element at address: 0x200000348240 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:09:39.253 element at address: 0x200000347fc0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:09:39.253 element at address: 0x200000347e00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:09:39.253 element at address: 0x200000344800 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:09:39.253 element at address: 0x200000344580 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:09:39.253 element at address: 0x2000003443c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:09:39.253 element at address: 0x200000340dc0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:09:39.253 element at address: 0x200000340b40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 
0.000122 MiB name: rte_compressdev_data_41 00:09:39.253 element at address: 0x200000340980 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:09:39.253 element at address: 0x20000033d380 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:09:39.253 element at address: 0x20000033d100 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:09:39.253 element at address: 0x20000033cf40 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:09:39.253 element at address: 0x200000339940 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:09:39.253 element at address: 0x2000003396c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:09:39.253 element at address: 0x200000339500 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:09:39.253 element at address: 0x200000335f00 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:09:39.253 element at address: 0x200000335c80 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:09:39.253 element at address: 0x200000335ac0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:09:39.253 element at address: 0x2000003324c0 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:09:39.253 element at address: 0x200000332240 with size: 0.000244 MiB 00:09:39.253 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:09:39.253 element at address: 0x200000332080 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:09:39.254 element at address: 0x20000032ea80 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:09:39.254 element at address: 0x20000032e800 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:09:39.254 element at address: 0x20000032e640 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:09:39.254 element at address: 0x20000032b040 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:09:39.254 element at address: 0x20000032adc0 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:09:39.254 element at address: 0x20000032ac00 with size: 0.000244 MiB 00:09:39.254 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:09:39.254 element at address: 0x2000003d6000 with size: 0.000183 MiB 00:09:39.254 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:09:39.254 10:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:39.254 10:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3303390 00:09:39.254 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3303390 ']' 00:09:39.254 10:19:52 
dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3303390 00:09:39.254 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:09:39.254 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.254 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3303390 00:09:39.513 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.513 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.513 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3303390' 00:09:39.513 killing process with pid 3303390 00:09:39.513 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3303390 00:09:39.513 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3303390 00:09:39.773 00:09:39.773 real 0m1.643s 00:09:39.773 user 0m1.782s 00:09:39.773 sys 0m0.539s 00:09:39.773 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.773 10:19:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:39.773 ************************************ 00:09:39.773 END TEST dpdk_mem_utility 00:09:39.773 ************************************ 00:09:39.773 10:19:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:09:39.773 10:19:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.773 10:19:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.773 10:19:52 -- common/autotest_common.sh@10 -- # set +x 00:09:39.773 ************************************ 00:09:39.773 START TEST event 00:09:39.773 ************************************ 00:09:39.773 10:19:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:09:40.033 * Looking for test storage... 00:09:40.033 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:09:40.033 10:19:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:40.033 10:19:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:40.033 10:19:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:40.033 10:19:52 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:40.033 10:19:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.033 10:19:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:40.033 ************************************ 00:09:40.033 START TEST event_perf 00:09:40.033 ************************************ 00:09:40.033 10:19:52 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:40.033 Running I/O for 1 seconds...[2024-07-26 10:19:52.749932] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
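For orientation, the event_perf run that starts here is the first of the event-framework tests driven by test/event/event.sh. The repeated "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" lines that follow each application start appear to come from EAL probing the QAT virtual functions on this crypto node; they are non-fatal here, since the tests below all complete successfully. A minimal sketch of reproducing this step by hand, assuming a built SPDK tree at the workspace path used in this log (the flags match the invocation above):

  cd /var/jenkins/workspace/crypto-phy-autotest/spdk
  # -m 0xF: run on four cores; -t 1: measure for one second (typically needs root for hugepage/DPDK access)
  sudo ./test/event/event_perf/event_perf -m 0xF -t 1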
00:09:40.033 [2024-07-26 10:19:52.749997] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303717 ] 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.6 cannot be used 
00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:40.033 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.033 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:40.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.034 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:40.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.034 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:40.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.034 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:40.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:40.034 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:40.034 [2024-07-26 10:19:52.882865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.034 [2024-07-26 10:19:52.930158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.034 [2024-07-26 10:19:52.930182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.034 [2024-07-26 10:19:52.930271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.034 [2024-07-26 10:19:52.930268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.414 Running I/O for 1 seconds... 00:09:41.414 lcore 0: 188839 00:09:41.414 lcore 1: 188838 00:09:41.414 lcore 2: 188837 00:09:41.414 lcore 3: 188839 00:09:41.414 done. 00:09:41.414 00:09:41.414 real 0m1.268s 00:09:41.414 user 0m4.117s 00:09:41.414 sys 0m0.144s 00:09:41.414 10:19:53 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.414 10:19:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:41.414 ************************************ 00:09:41.414 END TEST event_perf 00:09:41.414 ************************************ 00:09:41.414 10:19:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:41.414 10:19:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:41.414 10:19:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.414 10:19:54 event -- common/autotest_common.sh@10 -- # set +x 00:09:41.414 ************************************ 00:09:41.414 START TEST event_reactor 00:09:41.414 ************************************ 00:09:41.414 10:19:54 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:41.414 [2024-07-26 10:19:54.109503] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
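The event_reactor run starting here drives a single reactor for one second; the test_start / oneshot / tick 100 / tick 250 / tick 500 / test_end lines printed further down presumably correspond to the one-shot event and the test's periodic timers firing at their configured intervals during that window. A sketch of the standalone invocation under the same assumptions as above (only -t 1 is passed on the command line here; the 0x1 core mask visible in the EAL parameters comes from the app's defaults):

  # run the reactor test for one second on a single core
  sudo ./test/event/reactor/reactor -t 1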
00:09:41.414 [2024-07-26 10:19:54.109563] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303998 ] 00:09:41.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.414 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:41.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.414 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:41.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.414 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:41.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.414 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:41.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.414 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:41.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.414 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:41.415 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:41.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.415 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:41.415 [2024-07-26 10:19:54.244391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.415 [2024-07-26 10:19:54.286888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.793 test_start 00:09:42.794 oneshot 00:09:42.794 tick 100 00:09:42.794 tick 100 00:09:42.794 tick 250 00:09:42.794 tick 100 00:09:42.794 tick 100 00:09:42.794 tick 100 00:09:42.794 tick 250 00:09:42.794 tick 500 00:09:42.794 tick 100 00:09:42.794 tick 100 00:09:42.794 tick 250 00:09:42.794 tick 100 00:09:42.794 tick 100 00:09:42.794 test_end 00:09:42.794 00:09:42.794 real 0m1.269s 00:09:42.794 user 0m1.124s 00:09:42.794 sys 0m0.138s 00:09:42.794 10:19:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.794 10:19:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:42.794 ************************************ 00:09:42.794 END TEST event_reactor 00:09:42.794 ************************************ 00:09:42.794 10:19:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:42.794 10:19:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:42.794 10:19:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.794 10:19:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:42.794 ************************************ 00:09:42.794 START TEST event_reactor_perf 00:09:42.794 ************************************ 00:09:42.794 10:19:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:42.794 [2024-07-26 10:19:55.458782] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
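The event_reactor_perf run starting here measures raw event throughput on one reactor; its headline result is the "Performance: N events per second" line below (355530 events per second in this run). A standalone sketch, same assumptions as above:

  # report events processed per second over a one-second run
  sudo ./test/event/reactor_perf/reactor_perf -t 1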
00:09:42.794 [2024-07-26 10:19:55.458841] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304286 ] 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.6 cannot be used 
00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:42.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.794 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:42.794 [2024-07-26 10:19:55.591036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.794 [2024-07-26 10:19:55.632864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.170 test_start 00:09:44.170 test_end 00:09:44.170 Performance: 355530 events per second 00:09:44.170 00:09:44.170 real 0m1.265s 00:09:44.170 user 0m1.117s 00:09:44.170 sys 0m0.143s 00:09:44.170 10:19:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.170 10:19:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:44.170 ************************************ 00:09:44.170 END TEST event_reactor_perf 00:09:44.170 ************************************ 00:09:44.170 10:19:56 event -- event/event.sh@49 -- # uname -s 00:09:44.170 10:19:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:44.170 10:19:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:44.170 10:19:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.170 10:19:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.170 10:19:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:44.170 ************************************ 00:09:44.170 START TEST event_scheduler 00:09:44.170 ************************************ 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:44.170 * Looking for test storage... 
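The event_scheduler test that begins here is RPC-driven: scheduler.sh launches the scheduler test app with -m 0xF -p 0x2 --wait-for-rpc -f, switches it to the dynamic scheduler, finishes framework init, and then creates, retunes, and deletes threads through the scheduler_plugin RPCs, all of which appear in the trace below. A rough hand-run equivalent of that sequence (a sketch only; the harness's rpc_cmd helper talks to the app's default /var/tmp/spdk.sock socket, and invoking scripts/rpc.py directly may additionally need PYTHONPATH pointed at the scheduler_plugin location):

  sudo ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # create an always-active thread pinned to core 0 (mask 0x1, 100% active), as the test does
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100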
00:09:44.170 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:09:44.170 10:19:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:44.170 10:19:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3304589 00:09:44.170 10:19:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.170 10:19:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:44.170 10:19:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3304589 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3304589 ']' 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.170 10:19:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.170 [2024-07-26 10:19:56.960662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:44.170 [2024-07-26 10:19:56.960723] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304589 ] 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:09:44.170 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:44.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:44.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.171 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:44.171 [2024-07-26 10:19:57.066302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.430 [2024-07-26 10:19:57.107256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.430 [2024-07-26 10:19:57.107345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.430 [2024-07-26 10:19:57.107407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.430 [2024-07-26 10:19:57.107409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.430 10:19:57 
event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:44.430 10:19:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 [2024-07-26 10:19:57.156068] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:44.430 [2024-07-26 10:19:57.156089] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:44.430 [2024-07-26 10:19:57.156100] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:44.430 [2024-07-26 10:19:57.156108] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:44.430 [2024-07-26 10:19:57.156115] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 [2024-07-26 10:19:57.236636] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 ************************************ 00:09:44.430 START TEST scheduler_create_thread 00:09:44.430 ************************************ 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 2 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 3 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 4 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 5 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.430 6 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.430 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.689 7 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.689 8 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.689 9 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.689 10:19:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.689 10 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.689 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.987 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.987 10:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:44.987 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.987 10:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.891 10:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.891 10:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:46.891 10:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:46.891 10:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.891 10:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.827 10:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.827 00:09:47.827 real 0m3.101s 00:09:47.827 user 0m0.022s 00:09:47.827 sys 0m0.009s 00:09:47.827 10:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.827 10:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:47.827 ************************************ 00:09:47.827 END TEST scheduler_create_thread 00:09:47.827 ************************************ 00:09:47.827 10:20:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:47.827 10:20:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3304589 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3304589 ']' 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3304589 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:47.827 10:20:00 
event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3304589 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3304589' 00:09:47.827 killing process with pid 3304589 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3304589 00:09:47.827 10:20:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3304589 00:09:48.086 [2024-07-26 10:20:00.755938] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:48.086 00:09:48.086 real 0m4.171s 00:09:48.086 user 0m6.604s 00:09:48.086 sys 0m0.473s 00:09:48.086 10:20:00 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.086 10:20:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:48.086 ************************************ 00:09:48.086 END TEST event_scheduler 00:09:48.086 ************************************ 00:09:48.346 10:20:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:48.346 10:20:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:48.346 10:20:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:48.346 10:20:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.346 10:20:01 event -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 ************************************ 00:09:48.346 START TEST app_repeat 00:09:48.346 ************************************ 00:09:48.346 10:20:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3305191 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3305191' 00:09:48.346 Process app_repeat pid: 3305191 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:48.346 spdk_app_start Round 0 00:09:48.346 10:20:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3305191 /var/tmp/spdk-nbd.sock 00:09:48.346 10:20:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3305191 ']' 00:09:48.346 10:20:01 event.app_repeat 
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:48.346 10:20:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.346 10:20:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:48.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:48.346 10:20:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.346 10:20:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 [2024-07-26 10:20:01.082428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:09:48.346 [2024-07-26 10:20:01.082496] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3305191 ] 00:09:48.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:48.346 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:48.346 [... identical qat_pci_device_allocate() / "EAL: Requested device ... cannot be used" message pairs repeat here for the remaining QAT functions 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 ...]
00:09:48.346 [2024-07-26 10:20:01.215195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.346 [2024-07-26 10:20:01.259693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.606 [2024-07-26 10:20:01.259698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.606 10:20:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.606 10:20:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:48.606 10:20:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:48.865 Malloc0 00:09:48.865 10:20:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:49.123 Malloc1 00:09:49.124 10:20:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
10:20:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:49.124 10:20:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:49.383 /dev/nbd0 00:09:49.383 10:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:49.383 10:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:49.383 1+0 records in 00:09:49.383 1+0 records out 00:09:49.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245673 s, 16.7 MB/s 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:49.383 10:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:49.383 10:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.383 10:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:49.383 10:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:49.642 /dev/nbd1 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:49.642 10:20:02 
event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:49.642 1+0 records in 00:09:49.642 1+0 records out 00:09:49.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246125 s, 16.6 MB/s 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:49.642 10:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.642 10:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:49.902 { 00:09:49.902 "nbd_device": "/dev/nbd0", 00:09:49.902 "bdev_name": "Malloc0" 00:09:49.902 }, 00:09:49.902 { 00:09:49.902 "nbd_device": "/dev/nbd1", 00:09:49.902 "bdev_name": "Malloc1" 00:09:49.902 } 00:09:49.902 ]' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:49.902 { 00:09:49.902 "nbd_device": "/dev/nbd0", 00:09:49.902 "bdev_name": "Malloc0" 00:09:49.902 }, 00:09:49.902 { 00:09:49.902 "nbd_device": "/dev/nbd1", 00:09:49.902 "bdev_name": "Malloc1" 00:09:49.902 } 00:09:49.902 ]' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:49.902 /dev/nbd1' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:49.902 /dev/nbd1' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:49.902 10:20:02 
event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:49.902 256+0 records in 00:09:49.902 256+0 records out 00:09:49.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114424 s, 91.6 MB/s 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:49.902 256+0 records in 00:09:49.902 256+0 records out 00:09:49.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017049 s, 61.5 MB/s 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:49.902 256+0 records in 00:09:49.902 256+0 records out 00:09:49.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182791 s, 57.4 MB/s 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@51 -- 
# local i 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.902 10:20:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.162 10:20:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.421 10:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:50.684 10:20:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:50.684 10:20:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:50.945 10:20:03 event.app_repeat -- event/event.sh@35 -- # 
sleep 3 00:09:51.204 [2024-07-26 10:20:03.952839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:51.204 [2024-07-26 10:20:03.992950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.204 [2024-07-26 10:20:03.992955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.204 [2024-07-26 10:20:04.037258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:51.204 [2024-07-26 10:20:04.037305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:54.524 10:20:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:54.524 10:20:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:54.524 spdk_app_start Round 1 00:09:54.524 10:20:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3305191 /var/tmp/spdk-nbd.sock 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3305191 ']' 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:54.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.524 10:20:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:54.524 10:20:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.524 Malloc0 00:09:54.524 10:20:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:54.783 Malloc1 00:09:54.783 10:20:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
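The per-round setup traced above (waitforlisten on the RPC socket, two bdev_malloc_create calls, then nbd_start_disk for each bdev) reduces to a handful of rpc.py calls. A minimal standalone sketch, assuming the app_repeat instance is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded as the harness does with modprobe nbd:

#!/usr/bin/env bash
# Sketch of the per-round setup seen in the trace; not the harness itself.
set -e
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Two RAM-backed bdevs; 64 and 4096 are passed through exactly as in the
# trace (by the usual bdev_malloc_create convention, 64 MB with 4 KiB blocks).
# The RPC prints the generated names, Malloc0 and Malloc1 in the trace.
$RPC bdev_malloc_create 64 4096
$RPC bdev_malloc_create 64 4096

# Export each bdev through the kernel NBD driver.
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# List the attached devices; the harness later checks this count is 2.
$RPC nbd_get_disks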
00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.783 10:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:54.783 /dev/nbd0 00:09:54.784 10:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:54.784 10:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:54.784 1+0 records in 00:09:54.784 1+0 records out 00:09:54.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197517 s, 20.7 MB/s 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:54.784 10:20:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:54.784 10:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.784 10:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.784 10:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:55.042 /dev/nbd1 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:55.042 
10:20:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:55.042 1+0 records in 00:09:55.042 1+0 records out 00:09:55.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179394 s, 22.8 MB/s 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:55.042 10:20:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.042 10:20:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.301 10:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:55.301 { 00:09:55.301 "nbd_device": "/dev/nbd0", 00:09:55.301 "bdev_name": "Malloc0" 00:09:55.301 }, 00:09:55.301 { 00:09:55.301 "nbd_device": "/dev/nbd1", 00:09:55.301 "bdev_name": "Malloc1" 00:09:55.301 } 00:09:55.301 ]' 00:09:55.301 10:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:55.301 { 00:09:55.301 "nbd_device": "/dev/nbd0", 00:09:55.301 "bdev_name": "Malloc0" 00:09:55.301 }, 00:09:55.301 { 00:09:55.301 "nbd_device": "/dev/nbd1", 00:09:55.301 "bdev_name": "Malloc1" 00:09:55.301 } 00:09:55.301 ]' 00:09:55.301 10:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.560 10:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:55.560 /dev/nbd1' 00:09:55.560 10:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:55.560 /dev/nbd1' 00:09:55.560 10:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.560 10:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:55.560 10:20:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:55.560 10:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:55.561 256+0 records in 00:09:55.561 256+0 records out 00:09:55.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109196 s, 96.0 MB/s 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:55.561 256+0 records in 00:09:55.561 256+0 records out 00:09:55.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169464 s, 61.9 MB/s 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:55.561 256+0 records in 00:09:55.561 256+0 records out 00:09:55.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180037 s, 58.2 MB/s 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.561 10:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.819 10:20:08 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.819 10:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.078 10:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:56.336 10:20:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:56.336 10:20:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:56.594 10:20:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:56.853 [2024-07-26 10:20:09.542916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.853 [2024-07-26 10:20:09.583471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.853 [2024-07-26 10:20:09.583475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.853 [2024-07-26 10:20:09.628910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
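The data check run in each round, traced twice so far, writes one 1 MiB random pattern through both NBD devices and byte-compares it back before anything is torn down. A condensed sketch of that write/verify pass, with the block counts and file locations taken from the trace:

# Condensed sketch of the nbd_dd_data_verify write + verify pass.
TESTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event
PATTERN=$TESTDIR/nbdrandtest

# 1 MiB reference pattern: 256 blocks of 4 KiB random data.
dd if=/dev/urandom of=$PATTERN bs=4096 count=256

# Write the pattern to every exported NBD device with O_DIRECT.
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$PATTERN of=$nbd bs=4096 count=256 oflag=direct
done

# Read it back and stop at the first mismatching byte.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $PATTERN $nbd
done

rm $PATTERN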
00:09:56.853 [2024-07-26 10:20:09.628958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:00.138 10:20:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:00.138 10:20:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:00.138 spdk_app_start Round 2 00:10:00.138 10:20:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3305191 /var/tmp/spdk-nbd.sock 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3305191 ']' 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:00.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.138 10:20:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:00.138 10:20:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:00.138 Malloc0 00:10:00.138 10:20:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:00.138 Malloc1 00:10:00.138 10:20:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:00.138 10:20:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:00.397 /dev/nbd0 00:10:00.397 10:20:13 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:00.397 1+0 records in 00:10:00.397 1+0 records out 00:10:00.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162838 s, 25.2 MB/s 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:00.397 10:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.397 10:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:00.655 /dev/nbd1 00:10:00.655 10:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:00.655 10:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:00.655 1+0 records in 00:10:00.655 1+0 records out 00:10:00.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265542 s, 15.4 MB/s 00:10:00.655 10:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 
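The waitfornbd trace that surrounds each nbd_start_disk call just above (a grep against /proc/partitions, then a one-block direct read and a stat size check) suggests roughly the helper below; the 20-iteration limits and the nbdtest path come from the trace, while the sleep between retries is an assumption, since the successful first attempts in this log never reach it:

# Approximate reconstruction of the waitfornbd readiness check.
waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest

    # Wait for the kernel to list the device at all.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed; not visible in the trace
    done

    # Then make sure a direct read actually returns data.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
        sleep 0.1   # assumed
    done
    return 1
}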
00:10:00.913 10:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:00.913 10:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:10:00.913 10:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:00.913 10:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:00.913 { 00:10:00.913 "nbd_device": "/dev/nbd0", 00:10:00.913 "bdev_name": "Malloc0" 00:10:00.913 }, 00:10:00.913 { 00:10:00.913 "nbd_device": "/dev/nbd1", 00:10:00.913 "bdev_name": "Malloc1" 00:10:00.913 } 00:10:00.913 ]' 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:00.913 { 00:10:00.913 "nbd_device": "/dev/nbd0", 00:10:00.913 "bdev_name": "Malloc0" 00:10:00.913 }, 00:10:00.913 { 00:10:00.913 "nbd_device": "/dev/nbd1", 00:10:00.913 "bdev_name": "Malloc1" 00:10:00.913 } 00:10:00.913 ]' 00:10:00.913 10:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:01.172 /dev/nbd1' 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:01.172 /dev/nbd1' 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:01.172 256+0 records in 00:10:01.172 256+0 records out 00:10:01.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00876319 s, 120 MB/s 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.172 10:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 
oflag=direct 00:10:01.172 256+0 records in 00:10:01.172 256+0 records out 00:10:01.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171074 s, 61.3 MB/s 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:01.173 256+0 records in 00:10:01.173 256+0 records out 00:10:01.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180834 s, 58.0 MB/s 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.173 10:20:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.432 10:20:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.432 10:20:14 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.691 10:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:01.950 10:20:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:01.950 10:20:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:02.209 10:20:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:02.468 [2024-07-26 10:20:15.166086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.468 [2024-07-26 10:20:15.206664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.468 [2024-07-26 10:20:15.206669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.468 [2024-07-26 10:20:15.251078] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:02.468 [2024-07-26 10:20:15.251133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
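Teardown at the end of each round, as traced just above, detaches both devices, waits for them to drop out of /proc/partitions, confirms nbd_get_disks reports an empty list, and then asks the app to exit so the next round starts from scratch. Roughly, with the polling loop mirroring the waitfornbd_exit trace and the retry sleep again assumed:

# Sketch of the per-round teardown.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for nbd in /dev/nbd0 /dev/nbd1; do
    $RPC nbd_stop_disk $nbd
    # waitfornbd_exit: poll until the kernel no longer lists the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename $nbd)" /proc/partitions || break
        sleep 0.1   # assumed
    done
done

# The disk list should now be empty ('[]' in the trace, count 0).
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]

# Stop this app iteration; the harness then sleeps 3 s and starts the next round.
$RPC spdk_kill_instance SIGTERM
sleep 3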
00:10:05.755 10:20:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3305191 /var/tmp/spdk-nbd.sock 00:10:05.755 10:20:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3305191 ']' 00:10:05.755 10:20:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:05.755 10:20:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.755 10:20:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:05.755 10:20:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.755 10:20:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:05.755 10:20:18 event.app_repeat -- event/event.sh@39 -- # killprocess 3305191 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3305191 ']' 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3305191 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3305191 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3305191' 00:10:05.755 killing process with pid 3305191 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3305191 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3305191 00:10:05.755 spdk_app_start is called in Round 0. 00:10:05.755 Shutdown signal received, stop current app iteration 00:10:05.755 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 reinitialization... 00:10:05.755 spdk_app_start is called in Round 1. 00:10:05.755 Shutdown signal received, stop current app iteration 00:10:05.755 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 reinitialization... 00:10:05.755 spdk_app_start is called in Round 2. 00:10:05.755 Shutdown signal received, stop current app iteration 00:10:05.755 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 reinitialization... 00:10:05.755 spdk_app_start is called in Round 3. 
00:10:05.755 Shutdown signal received, stop current app iteration 00:10:05.755 10:20:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:05.755 10:20:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:05.755 00:10:05.755 real 0m17.368s 00:10:05.755 user 0m37.703s 00:10:05.755 sys 0m3.559s 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.755 10:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:05.755 ************************************ 00:10:05.755 END TEST app_repeat 00:10:05.755 ************************************ 00:10:05.755 10:20:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:05.755 00:10:05.755 real 0m25.866s 00:10:05.755 user 0m50.861s 00:10:05.755 sys 0m4.826s 00:10:05.755 10:20:18 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.755 10:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:10:05.755 ************************************ 00:10:05.755 END TEST event 00:10:05.755 ************************************ 00:10:05.755 10:20:18 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:10:05.755 10:20:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.755 10:20:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.755 10:20:18 -- common/autotest_common.sh@10 -- # set +x 00:10:05.755 ************************************ 00:10:05.755 START TEST thread 00:10:05.755 ************************************ 00:10:05.755 10:20:18 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:10:05.755 * Looking for test storage... 00:10:05.755 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:10:05.755 10:20:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:05.755 10:20:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:05.755 10:20:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.755 10:20:18 thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.015 ************************************ 00:10:06.015 START TEST thread_poller_perf 00:10:06.015 ************************************ 00:10:06.015 10:20:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:06.015 [2024-07-26 10:20:18.711104] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
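poller_perf is launched here with -b 1000 -l 1 -t 1; judging from the banner it prints once its reactor is up ("Running 1000 pollers for 1 seconds with 1 microseconds period", shown below), -b is the poller count, -t the run time in seconds and -l the poller period in microseconds, with the second invocation using -l 0, presumably pollers that run on every reactor iteration. A direct invocation outside run_test would look like:

# Running the poller_perf benchmark directly (flag meanings inferred as above).
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
$SPDK/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1 us period pollers
$SPDK/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # period-0 pollers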
00:10:06.015 [2024-07-26 10:20:18.711166] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308576 ] 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (one qat_pci_device_allocate()/EAL message pair logged per requested device)
00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:06.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:06.015 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:06.015 [2024-07-26 10:20:18.842614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.015 [2024-07-26 10:20:18.885751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.015 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:07.393 ====================================== 00:10:07.393 busy:2508649092 (cyc) 00:10:07.393 total_run_count: 289000 00:10:07.393 tsc_hz: 2500000000 (cyc) 00:10:07.393 ====================================== 00:10:07.393 poller_cost: 8680 (cyc), 3472 (nsec) 00:10:07.393 00:10:07.393 real 0m1.276s 00:10:07.393 user 0m1.133s 00:10:07.393 sys 0m0.138s 00:10:07.393 10:20:19 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.393 10:20:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:07.393 ************************************ 00:10:07.393 END TEST thread_poller_perf 00:10:07.393 ************************************ 00:10:07.393 10:20:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:07.393 10:20:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:07.393 10:20:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.393 10:20:20 thread -- common/autotest_common.sh@10 -- # set +x 00:10:07.393 ************************************ 00:10:07.393 START TEST thread_poller_perf 00:10:07.393 ************************************ 00:10:07.393 10:20:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:07.393 [2024-07-26 10:20:20.072078] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
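The poller_cost line in the first run's summary above is consistent with dividing the reported busy cycles by total_run_count and converting cycles to nanoseconds with the reported tsc_hz (2.5 GHz). A minimal shell sketch of that arithmetic, assuming those two relationships hold; the variable names are illustrative and not part of poller_perf or thread.sh:

# Illustrative check of the figures printed above, not part of the test suite.
busy=2508649092; total_run_count=289000; tsc_hz=2500000000
cyc=$(( busy / total_run_count ))          # 2508649092 / 289000 = 8680 (cyc)
nsec=$(( cyc * 1000000000 / tsc_hz ))      # 8680 cyc at 2.5 cyc/ns = 3472 (nsec)
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic reproduces the 660 (cyc) and 264 (nsec) poller_cost reported for the zero-period run further below.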
00:10:07.393 [2024-07-26 10:20:20.072152] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308802 ] 00:10:07.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.393 EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (one qat_pci_device_allocate()/EAL message pair logged per requested device)
00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:07.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.394 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:07.394 [2024-07-26 10:20:20.204504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.394 [2024-07-26 10:20:20.248061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.394 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:08.774 ====================================== 00:10:08.774 busy:2502460460 (cyc) 00:10:08.774 total_run_count: 3791000 00:10:08.774 tsc_hz: 2500000000 (cyc) 00:10:08.774 ====================================== 00:10:08.774 poller_cost: 660 (cyc), 264 (nsec) 00:10:08.774 00:10:08.774 real 0m1.270s 00:10:08.774 user 0m1.125s 00:10:08.774 sys 0m0.140s 00:10:08.774 10:20:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.774 10:20:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:08.774 ************************************ 00:10:08.774 END TEST thread_poller_perf 00:10:08.774 ************************************ 00:10:08.774 10:20:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:08.774 00:10:08.774 real 0m2.824s 00:10:08.774 user 0m2.365s 00:10:08.774 sys 0m0.472s 00:10:08.774 10:20:21 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.774 10:20:21 thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.774 ************************************ 00:10:08.774 END TEST thread 00:10:08.774 ************************************ 00:10:08.774 10:20:21 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]] 00:10:08.774 10:20:21 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:10:08.774 10:20:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:08.774 10:20:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.774 10:20:21 -- common/autotest_common.sh@10 -- # set +x 00:10:08.774 ************************************ 00:10:08.774 START TEST accel 00:10:08.774 ************************************ 00:10:08.774 10:20:21 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:10:08.774 * Looking for test storage... 
00:10:08.774 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:10:08.774 10:20:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:08.774 10:20:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:10:08.774 10:20:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:08.774 10:20:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3309100 00:10:08.774 10:20:21 accel -- accel/accel.sh@63 -- # waitforlisten 3309100 00:10:08.774 10:20:21 accel -- common/autotest_common.sh@831 -- # '[' -z 3309100 ']' 00:10:08.774 10:20:21 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.774 10:20:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:08.774 10:20:21 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:08.774 10:20:21 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.774 10:20:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:08.774 10:20:21 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.775 10:20:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:08.775 10:20:21 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.775 10:20:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.775 10:20:21 accel -- common/autotest_common.sh@10 -- # set +x 00:10:08.775 10:20:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.775 10:20:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:08.775 10:20:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:08.775 10:20:21 accel -- accel/accel.sh@41 -- # jq -r . 00:10:08.775 [2024-07-26 10:20:21.625990] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:08.775 [2024-07-26 10:20:21.626059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309100 ] 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (one qat_pci_device_allocate()/EAL message pair logged per requested device) 00:10:09.034
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:09.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.034 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:09.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.035 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:09.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.035 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:09.035 [2024-07-26 10:20:21.759093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.035 [2024-07-26 10:20:21.804662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.638 10:20:22 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.638 10:20:22 accel -- common/autotest_common.sh@864 -- # return 0 00:10:09.638 10:20:22 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:09.638 10:20:22 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:09.638 10:20:22 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:09.638 10:20:22 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:09.638 10:20:22 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:09.638 10:20:22 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:09.638 10:20:22 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:09.638 10:20:22 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.638 10:20:22 accel -- common/autotest_common.sh@10 -- # set +x 00:10:09.638 10:20:22 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.897 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.897 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.897 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.897 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.897 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.897 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.897 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.897 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.897 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.897 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.897 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.897 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 
10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # IFS== 00:10:09.898 10:20:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:09.898 10:20:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:09.898 10:20:22 accel -- accel/accel.sh@75 -- # killprocess 3309100 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@950 -- # '[' -z 3309100 ']' 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@954 -- # kill -0 3309100 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@955 -- # uname 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3309100 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3309100' 00:10:09.898 killing process with pid 3309100 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@969 -- # kill 3309100 00:10:09.898 10:20:22 accel -- common/autotest_common.sh@974 -- # wait 3309100 00:10:10.157 10:20:22 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:10.157 10:20:22 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:10.157 10:20:22 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.157 10:20:22 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.157 10:20:22 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 10:20:22 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:10:10.157 10:20:22 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:10:10.157 10:20:23 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.157 10:20:23 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:10:10.416 10:20:23 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:10.416 10:20:23 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:10.416 10:20:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.416 10:20:23 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.416 ************************************ 00:10:10.416 START TEST accel_missing_filename 00:10:10.416 ************************************ 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.416 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:10:10.416 10:20:23 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:10:10.416 [2024-07-26 10:20:23.153311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:10.416 [2024-07-26 10:20:23.153374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309411 ] 00:10:10.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.416 EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (one qat_pci_device_allocate()/EAL message pair logged per requested device)
00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:10.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.417 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:10.417 [2024-07-26 10:20:23.286748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.677 [2024-07-26 10:20:23.330458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.677 [2024-07-26 10:20:23.386114] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.677 [2024-07-26 10:20:23.447690] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:10.677 A filename is required. 00:10:10.677 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234 00:10:10.677 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:10.678 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106 00:10:10.678 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in 00:10:10.678 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1 00:10:10.678 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:10.678 00:10:10.678 real 0m0.399s 00:10:10.678 user 0m0.226s 00:10:10.678 sys 0m0.203s 00:10:10.678 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.678 10:20:23 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:10:10.678 ************************************ 00:10:10.678 END TEST accel_missing_filename 00:10:10.678 ************************************ 00:10:10.678 10:20:23 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:10.678 10:20:23 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:10:10.678 10:20:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.678 10:20:23 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.938 ************************************ 00:10:10.938 START TEST accel_compress_verify 00:10:10.938 ************************************ 00:10:10.938 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:10.938 10:20:23 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0 00:10:10.938 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:10.938 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:10.938 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.938 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:10.938 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.939 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:10.939 10:20:23 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:10.939 [2024-07-26 10:20:23.623232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:10.939 [2024-07-26 10:20:23.623288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309511 ] 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (one qat_pci_device_allocate()/EAL message pair logged per requested device)
00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:10.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:10.939 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:10.939 [2024-07-26 10:20:23.751067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.939 [2024-07-26 10:20:23.794961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.197 [2024-07-26 10:20:23.854612] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.197 [2024-07-26 10:20:23.918556] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:10:11.197 00:10:11.197 Compression does not support the verify option, aborting. 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.197 00:10:11.197 real 0m0.397s 00:10:11.197 user 0m0.242s 00:10:11.197 sys 0m0.188s 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.197 10:20:23 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:11.197 ************************************ 00:10:11.197 END TEST accel_compress_verify 00:10:11.197 ************************************ 00:10:11.197 10:20:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:11.197 10:20:24 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:11.197 10:20:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.197 10:20:24 accel -- common/autotest_common.sh@10 -- # set +x 00:10:11.197 ************************************ 00:10:11.197 START TEST accel_wrong_workload 00:10:11.197 ************************************ 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:10:11.197 10:20:24 
accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:11.197 10:20:24 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:11.197 Unsupported workload type: foobar 00:10:11.197 [2024-07-26 10:20:24.088265] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:11.197 accel_perf options: 00:10:11.197 [-h help message] 00:10:11.197 [-q queue depth per core] 00:10:11.197 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:11.197 [-T number of threads per core 00:10:11.197 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:11.197 [-t time in seconds] 00:10:11.197 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:11.197 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:11.197 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:11.197 [-l for compress/decompress workloads, name of uncompressed input file 00:10:11.197 [-S for crc32c workload, use this seed value (default 0) 00:10:11.197 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:11.197 [-f for fill workload, use this BYTE value (default 255) 00:10:11.197 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:11.197 [-y verify result if this switch is on] 00:10:11.197 [-a tasks to allocate per core (default: same value as -q)] 00:10:11.197 Can be used to spread operations across a wider range of memory. 
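The usage text above is what accel_perf prints when argument parsing fails; the accel_wrong_workload test above triggers it with an unknown workload, and accel_negative_buffers below does the same with an invalid -x value. For contrast, a well-formed invocation assembled only from options shown in that listing could look like the line below (a hedged example, not taken from this log; the -q and -o values are arbitrary):

/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 64 -o 4096

This mirrors the crc32c invocation the accel_crc32c test issues later in the log, with an explicit queue depth and transfer size added.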
00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.197 00:10:11.197 real 0m0.038s 00:10:11.197 user 0m0.020s 00:10:11.197 sys 0m0.017s 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.197 10:20:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:11.197 ************************************ 00:10:11.197 END TEST accel_wrong_workload 00:10:11.197 ************************************ 00:10:11.456 Error: writing output failed: Broken pipe 00:10:11.456 10:20:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:11.456 10:20:24 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:10:11.456 10:20:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.456 10:20:24 accel -- common/autotest_common.sh@10 -- # set +x 00:10:11.456 ************************************ 00:10:11.456 START TEST accel_negative_buffers 00:10:11.456 ************************************ 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # type -t accel_perf 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:11.456 10:20:24 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:11.456 -x option must be non-negative. 
00:10:11.456 [2024-07-26 10:20:24.191831] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:11.456 accel_perf options: 00:10:11.456 [-h help message] 00:10:11.456 [-q queue depth per core] 00:10:11.456 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:11.456 [-T number of threads per core 00:10:11.456 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:11.456 [-t time in seconds] 00:10:11.456 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:11.456 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:11.456 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:11.456 [-l for compress/decompress workloads, name of uncompressed input file 00:10:11.456 [-S for crc32c workload, use this seed value (default 0) 00:10:11.456 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:11.456 [-f for fill workload, use this BYTE value (default 255) 00:10:11.456 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:11.456 [-y verify result if this switch is on] 00:10:11.456 [-a tasks to allocate per core (default: same value as -q)] 00:10:11.456 Can be used to spread operations across a wider range of memory. 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.456 00:10:11.456 real 0m0.032s 00:10:11.456 user 0m0.014s 00:10:11.456 sys 0m0.017s 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.456 10:20:24 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:11.456 ************************************ 00:10:11.456 END TEST accel_negative_buffers 00:10:11.456 ************************************ 00:10:11.456 Error: writing output failed: Broken pipe 00:10:11.456 10:20:24 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:11.456 10:20:24 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:11.456 10:20:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.456 10:20:24 accel -- common/autotest_common.sh@10 -- # set +x 00:10:11.456 ************************************ 00:10:11.456 START TEST accel_crc32c 00:10:11.456 ************************************ 00:10:11.456 10:20:24 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:11.456 10:20:24 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:11.456 [2024-07-26 10:20:24.313270] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:11.456 [2024-07-26 10:20:24.313334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309579 ] 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:10:11.715 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:11.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.715 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:11.715 [2024-07-26 10:20:24.434405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.715 [2024-07-26 10:20:24.478194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.715 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:11.716 10:20:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:13.089 10:20:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:13.089 00:10:13.089 real 0m1.394s 00:10:13.089 user 0m0.009s 00:10:13.089 sys 0m0.001s 00:10:13.089 10:20:25 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.089 10:20:25 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:13.089 ************************************ 00:10:13.089 END TEST accel_crc32c 00:10:13.089 ************************************ 00:10:13.089 10:20:25 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:13.089 10:20:25 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:13.089 10:20:25 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.089 10:20:25 accel -- common/autotest_common.sh@10 -- # set +x 
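For reference, the accel_crc32c case that just completed is driven by a single accel_perf invocation (the trace above shows run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y, which ultimately execs /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf). A minimal stand-alone sketch of the same software-path CRC32C run follows; it reuses only the binary path and the flags that appear in the trace and in the usage text printed earlier, and it deliberately omits the accel JSON config the harness passes on /dev/fd/62 (assumption: with no config the example falls back to the software module, which matches accel_module=software in the trace).

    # Hedged sketch, not part of the harness output above; the path and flags are copied
    # from the trace, everything else is an assumption.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    ACCEL_PERF="$SPDK_DIR/build/examples/accel_perf"
    # -t 1  : run the workload for 1 second      -w crc32c : CRC32C workload type
    # -S 32 : seed value for the crc32c run      -y        : verify the results
    "$ACCEL_PERF" -t 1 -w crc32c -S 32 -y

The harness wraps this same command in run_test/accel_test, which is what produces the START TEST / END TEST banners and the xtrace lines seen above and below.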
00:10:13.089 ************************************ 00:10:13.089 START TEST accel_crc32c_C2 00:10:13.089 ************************************ 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:13.089 10:20:25 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:13.089 [2024-07-26 10:20:25.770715] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:13.089 [2024-07-26 10:20:25.770770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309859 ] 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:13.089 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:13.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.089 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:13.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:13.090 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:13.090 [2024-07-26 10:20:25.903157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.090 [2024-07-26 10:20:25.946801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.347 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:13.348 10:20:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:14.282 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:14.283 
10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:14.283 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:14.283 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:14.283 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:14.283 10:20:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.283 00:10:14.283 real 0m1.404s 00:10:14.283 user 0m0.006s 00:10:14.283 sys 0m0.002s 00:10:14.283 10:20:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.283 10:20:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:14.283 ************************************ 00:10:14.283 END TEST accel_crc32c_C2 00:10:14.283 ************************************ 00:10:14.283 10:20:27 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:14.283 10:20:27 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:14.283 10:20:27 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.283 10:20:27 accel -- common/autotest_common.sh@10 -- # set +x 00:10:14.542 ************************************ 00:10:14.542 START TEST accel_copy 00:10:14.542 ************************************ 00:10:14.542 10:20:27 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:14.542 10:20:27 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:14.542 [2024-07-26 10:20:27.247640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:14.542 [2024-07-26 10:20:27.247695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310142 ] 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:14.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:14.542 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:14.542 [2024-07-26 10:20:27.379126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.542 [2024-07-26 10:20:27.422996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 
10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.801 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:14.802 10:20:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:15.736 10:20:28 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:15.736 00:10:15.736 real 0m1.406s 00:10:15.736 user 0m0.006s 00:10:15.736 sys 0m0.002s 00:10:15.736 10:20:28 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.736 10:20:28 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:15.736 ************************************ 00:10:15.736 END TEST accel_copy 00:10:15.736 ************************************ 00:10:15.994 10:20:28 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.994 10:20:28 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:15.994 10:20:28 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.994 10:20:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:15.994 ************************************ 00:10:15.994 START TEST accel_fill 00:10:15.994 ************************************ 00:10:15.994 10:20:28 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.994 10:20:28 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:15.994 10:20:28 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:15.994 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:15.994 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:15.994 10:20:28 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.995 10:20:28 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:15.995 10:20:28 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:15.995 [2024-07-26 10:20:28.735334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:15.995 [2024-07-26 10:20:28.735392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310421 ] 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:15.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:15.995 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:15.995 [2024-07-26 10:20:28.869705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.254 [2024-07-26 10:20:28.913342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 
00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:16.254 10:20:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:16.254 10:20:28 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:17.631 10:20:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:17.631 00:10:17.631 real 0m1.406s 00:10:17.631 user 0m0.006s 00:10:17.631 sys 0m0.004s 00:10:17.631 10:20:30 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.631 10:20:30 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:17.631 ************************************ 00:10:17.631 END TEST accel_fill 00:10:17.631 ************************************ 00:10:17.631 10:20:30 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:17.631 10:20:30 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:17.631 10:20:30 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.631 10:20:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:17.631 ************************************ 00:10:17.631 START TEST accel_copy_crc32c 00:10:17.631 ************************************ 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:17.631 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:17.631 [2024-07-26 10:20:30.220554] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:17.631 [2024-07-26 10:20:30.220610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310707 ] 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.631 
EAL: Requested device 0000:3d:02.7 cannot be used 00:10:17.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:17.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:17.632 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:17.632 [2024-07-26 10:20:30.351673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.632 [2024-07-26 10:20:30.394873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c 
-- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val=1 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:17.632 10:20:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:19.007 10:20:31 accel.accel_copy_crc32c 
-- accel/accel.sh@19 -- # read -r var val 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.007 00:10:19.007 real 0m1.405s 00:10:19.007 user 0m0.006s 00:10:19.007 sys 0m0.003s 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.007 10:20:31 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:19.007 ************************************ 00:10:19.007 END TEST accel_copy_crc32c 00:10:19.007 ************************************ 00:10:19.007 10:20:31 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:19.007 10:20:31 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:19.007 10:20:31 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.007 10:20:31 accel -- common/autotest_common.sh@10 -- # set +x 00:10:19.007 ************************************ 00:10:19.007 START TEST accel_copy_crc32c_C2 00:10:19.007 ************************************ 00:10:19.007 10:20:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:19.007 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.007 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:19.008 10:20:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:19.008 [2024-07-26 10:20:31.707417] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:19.008 [2024-07-26 10:20:31.707552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310986 ] 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:19.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:19.008 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:19.267 [2024-07-26 10:20:31.914027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.267 [2024-07-26 10:20:31.962783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:19.267 10:20:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.639 00:10:20.639 real 0m1.496s 00:10:20.639 user 0m0.009s 00:10:20.639 sys 0m0.000s 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.639 10:20:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:20.639 ************************************ 00:10:20.639 END TEST accel_copy_crc32c_C2 00:10:20.639 
************************************ 00:10:20.639 10:20:33 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:20.639 10:20:33 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:20.639 10:20:33 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.639 10:20:33 accel -- common/autotest_common.sh@10 -- # set +x 00:10:20.639 ************************************ 00:10:20.639 START TEST accel_dualcast 00:10:20.639 ************************************ 00:10:20.639 10:20:33 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:10:20.639 [2024-07-26 10:20:33.274553] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:20.639 [2024-07-26 10:20:33.274612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311272 ] 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:20.639 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.639 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:20.639 [2024-07-26 10:20:33.407986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.639 [2024-07-26 10:20:33.449966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:20.639 10:20:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:22.009 10:20:34 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:22.009 00:10:22.009 real 0m1.405s 00:10:22.009 user 0m0.009s 00:10:22.009 sys 0m0.001s 00:10:22.010 10:20:34 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.010 10:20:34 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:22.010 ************************************ 00:10:22.010 END TEST accel_dualcast 00:10:22.010 ************************************ 00:10:22.010 10:20:34 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:22.010 10:20:34 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:22.010 10:20:34 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.010 10:20:34 accel -- common/autotest_common.sh@10 -- # set +x 00:10:22.010 ************************************ 00:10:22.010 START TEST accel_compare 00:10:22.010 ************************************ 00:10:22.010 10:20:34 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compare -y 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:22.010 10:20:34 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:22.010 [2024-07-26 10:20:34.751944] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:22.010 [2024-07-26 10:20:34.752001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311551 ] 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.1 cannot be used 
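Most of the remaining volume in these tests is xtrace output from accel.sh's key/value parser: line by line it sets IFS=:, does read -r var val, and dispatches on case "$var", recording among other things accel_module=software and accel_opc=compare, which the closing [[ -n software ]] / [[ -n compare ]] checks then assert on. A stripped-down reconstruction of that pattern is shown below; only the IFS/read/case structure comes from the trace, while the key names and sample input are illustrative assumptions.

  # Hedged reconstruction of the parse loop behind the repeated IFS=:/read/case trace lines.
  while IFS=: read -r var val; do
      case "$var" in
          'Module')   accel_module=${val# } ;;   # hypothetical key name
          'Workload') accel_opc=${val# }    ;;   # hypothetical key name
      esac
  done < <(printf '%s\n' 'Module: software' 'Workload: compare')   # stand-in input
  [[ -n $accel_module && $accel_module == software ]] && echo 'ran on the software engine'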
00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:22.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:22.010 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:22.010 [2024-07-26 10:20:34.883047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.269 [2024-07-26 10:20:34.926256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:22.269 10:20:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:23.651 10:20:36 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.651 00:10:23.651 real 0m1.405s 00:10:23.651 user 0m0.007s 00:10:23.651 sys 0m0.001s 00:10:23.651 10:20:36 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.651 10:20:36 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:23.651 ************************************ 00:10:23.651 END TEST accel_compare 00:10:23.651 ************************************ 00:10:23.651 10:20:36 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:23.651 10:20:36 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:23.651 10:20:36 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.651 10:20:36 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.651 ************************************ 00:10:23.651 START TEST accel_xor 00:10:23.651 ************************************ 00:10:23.651 10:20:36 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@16 
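Note on the accel_compare case that closes above: its 1-second workload finished in roughly 1.4 s of wall time, and the vals recorded by the harness show the compare opcode, a 4096-byte buffer, the software accel module, two 32-value settings and a single core (mask 0x1). The exact accel_perf command line for this case sits outside this excerpt; the following is only a rough sketch inferred from those vals and from the xor invocation captured further down, with the -c /dev/fd/62 config handle (supplied by the harness) treated as an assumption for any standalone run:

  # Sketch only - reconstructs the compare run from the recorded vals.
  # -w compare selects the opcode, -y asks accel_perf to verify the result,
  # and /dev/fd/62 is the JSON accel config the harness feeds in.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w compare -y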
-- # local accel_opc 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:23.651 10:20:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:23.651 [2024-07-26 10:20:36.238185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:23.651 [2024-07-26 10:20:36.238243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311832 ] 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:23.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.651 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:23.652 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:23.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:23.652 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:23.652 [2024-07-26 10:20:36.367714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.652 [2024-07-26 10:20:36.412269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r 
var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.652 10:20:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:25.063 10:20:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.063 00:10:25.063 real 0m1.404s 00:10:25.063 user 0m1.234s 00:10:25.063 sys 0m0.175s 00:10:25.063 10:20:37 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.063 10:20:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:25.063 ************************************ 00:10:25.063 END TEST accel_xor 00:10:25.063 ************************************ 00:10:25.063 10:20:37 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:25.064 10:20:37 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:25.064 
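The first accel_xor pass above runs the invocation recorded at its start: a 1-second software xor over 4096-byte buffers with two source buffers (val=2) and result verification, completing in about 1.4 s. The command as captured in the trace:

  # Captured invocation for the two-source xor case; /dev/fd/62 carries the
  # JSON accel config generated by build_accel_config in accel.sh.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w xor -y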
10:20:37 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.064 10:20:37 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.064 ************************************ 00:10:25.064 START TEST accel_xor 00:10:25.064 ************************************ 00:10:25.064 10:20:37 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:25.064 [2024-07-26 10:20:37.729645] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:25.064 [2024-07-26 10:20:37.729705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312123 ] 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.2 cannot be 
used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:25.064 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.064 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:25.064 [2024-07-26 10:20:37.862662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.064 [2024-07-26 10:20:37.906928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 
00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.064 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:25.323 10:20:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:26.257 10:20:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:26.258 10:20:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.258 00:10:26.258 real 0m1.406s 00:10:26.258 user 0m1.218s 00:10:26.258 sys 0m0.195s 00:10:26.258 10:20:39 accel.accel_xor -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.258 10:20:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:26.258 ************************************ 00:10:26.258 END TEST accel_xor 00:10:26.258 ************************************ 00:10:26.258 10:20:39 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:26.258 10:20:39 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:26.258 10:20:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.258 10:20:39 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.517 ************************************ 00:10:26.517 START TEST accel_dif_verify 00:10:26.517 ************************************ 00:10:26.517 10:20:39 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:26.517 10:20:39 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:26.517 [2024-07-26 10:20:39.216225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
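The second accel_xor pass differs from the first only in -x 3, which raises the xor source-buffer count from the default two (val=2 above) to three (val=3); the timing stays essentially unchanged at about 1.4 s. The captured invocation:

  # Captured invocation for the three-source xor case; -x sets the number of
  # source buffers fed to the xor operation.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w xor -y -x 3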
00:10:26.517 [2024-07-26 10:20:39.216281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312400 ] 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:26.517 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:26.517 [2024-07-26 10:20:39.347235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.517 [2024-07-26 10:20:39.390788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 
10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:26.776 10:20:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:27.711 10:20:40 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.711 00:10:27.711 real 0m1.408s 00:10:27.711 user 0m1.228s 00:10:27.711 sys 0m0.189s 00:10:27.711 10:20:40 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.711 10:20:40 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:27.711 ************************************ 00:10:27.711 END TEST accel_dif_verify 00:10:27.711 ************************************ 00:10:27.969 10:20:40 accel -- accel/accel.sh@112 -- # 
run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:27.969 10:20:40 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:27.969 10:20:40 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.969 10:20:40 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.969 ************************************ 00:10:27.969 START TEST accel_dif_generate 00:10:27.969 ************************************ 00:10:27.969 10:20:40 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:27.969 10:20:40 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:27.969 [2024-07-26 10:20:40.703258] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:27.969 [2024-07-26 10:20:40.703313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312687 ] 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:27.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.969 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:27.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.970 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:27.970 [2024-07-26 10:20:40.834585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.227 [2024-07-26 10:20:40.878150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 
10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.227 10:20:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:29.601 10:20:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.601 00:10:29.601 real 0m1.406s 00:10:29.601 user 0m1.220s 00:10:29.601 sys 0m0.191s 00:10:29.601 10:20:42 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.601 10:20:42 accel.accel_dif_generate -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.601 ************************************ 00:10:29.601 END TEST accel_dif_generate 00:10:29.601 ************************************ 00:10:29.601 10:20:42 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:29.601 10:20:42 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:29.601 10:20:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.601 10:20:42 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.601 ************************************ 00:10:29.601 START TEST accel_dif_generate_copy 00:10:29.601 ************************************ 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:29.601 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:29.601 [2024-07-26 10:20:42.198434] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
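Note on the trace above: the accel_dif_generate_copy case is driven by accel.sh, which wraps the accel_perf example binary, and the recurring "Reached maximum number of QAT devices" / "cannot be used" warnings mean the DPDK QAT driver skipped those virtual functions, so the case runs on the software module (as the later "[[ -n software ]]" checks confirm). A minimal sketch of reproducing the same workload by hand, using only the paths and flags shown in the trace; /dev/fd/62 is the JSON accel config piped in by build_accel_config and is effectively empty in this run, so a manual invocation would point -c at a real config file or simply leave it out:

    # Sketch only - paths and flags copied from the trace above, not part of the harness.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy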
00:10:29.601 [2024-07-26 10:20:42.198491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312965 ] 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:29.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.601 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:29.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.602 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:29.602 [2024-07-26 10:20:42.328070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.602 [2024-07-26 10:20:42.370809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 
00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:29.602 10:20:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.974 00:10:30.974 real 0m1.406s 00:10:30.974 user 0m1.230s 00:10:30.974 sys 0m0.179s 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.974 10:20:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:30.974 ************************************ 00:10:30.974 END TEST accel_dif_generate_copy 00:10:30.974 
************************************ 00:10:30.974 10:20:43 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:30.974 10:20:43 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:30.974 10:20:43 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:30.974 10:20:43 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.974 10:20:43 accel -- common/autotest_common.sh@10 -- # set +x 00:10:30.974 ************************************ 00:10:30.974 START TEST accel_comp 00:10:30.974 ************************************ 00:10:30.974 10:20:43 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:30.974 10:20:43 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:30.974 10:20:43 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:30.975 10:20:43 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:30.975 [2024-07-26 10:20:43.688308] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
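Note on the trace above: the accel_comp case exercises the compress opcode for one second (-t 1) and points accel_perf at the test input file test/accel/bib via -l, again with the empty JSON config on /dev/fd/62. A minimal hand-run sketch built from the same command line:

    # Sketch only - compress workload as traced above; -l names the test input file.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib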
00:10:30.975 [2024-07-26 10:20:43.688362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313250 ] 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:30.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:30.975 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:30.975 [2024-07-26 10:20:43.821390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.975 [2024-07-26 10:20:43.865529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:31.233 
10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.233 10:20:43 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:10:31.233 10:20:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:32.165 10:20:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:32.424 10:20:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:32.424 10:20:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.424 00:10:32.424 real 0m1.412s 00:10:32.424 user 0m1.239s 00:10:32.424 sys 0m0.183s 00:10:32.424 10:20:45 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.424 10:20:45 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:32.424 ************************************ 00:10:32.424 END TEST accel_comp 00:10:32.424 ************************************ 00:10:32.424 10:20:45 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:32.424 10:20:45 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:32.424 10:20:45 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.424 10:20:45 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.424 ************************************ 00:10:32.424 START TEST accel_decomp 00:10:32.424 ************************************ 00:10:32.424 10:20:45 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # 
IFS=: 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:32.424 10:20:45 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:32.424 [2024-07-26 10:20:45.183177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:32.424 [2024-07-26 10:20:45.183233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313530 ] 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:32.424 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:32.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.424 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:32.424 [2024-07-26 10:20:45.313559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.682 [2024-07-26 10:20:45.357221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=1 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.682 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.683 10:20:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:34.057 10:20:46 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:34.057 10:20:46 
accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:34.057 00:10:34.057 real 0m1.408s 00:10:34.057 user 0m1.224s 00:10:34.057 sys 0m0.195s 00:10:34.057 10:20:46 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.057 10:20:46 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:34.057 ************************************ 00:10:34.057 END TEST accel_decomp 00:10:34.057 ************************************ 00:10:34.057 10:20:46 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.057 10:20:46 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:34.057 10:20:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.057 10:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:10:34.057 ************************************ 00:10:34.057 START TEST accel_decomp_full 00:10:34.057 ************************************ 00:10:34.057 10:20:46 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:34.057 [2024-07-26 10:20:46.672505] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
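Note on the trace above and below: accel_decomp and accel_decomp_full both run the decompress opcode against test/accel/bib; the harness passes -y for both and adds -o 0 for the _full variant. A sketch of the two invocations, copied from the traced command lines; the meaning of -y and -o 0 is not stated in this log, so they are reproduced verbatim rather than interpreted:

    # Sketch only - decompress workloads as traced; -y and -o 0 copied verbatim from the harness.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y          # accel_decomp
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0     # accel_decomp_full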
00:10:34.057 [2024-07-26 10:20:46.672561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313815 ] 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:34.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.057 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:34.057 [2024-07-26 10:20:46.804860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.057 [2024-07-26 10:20:46.848183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.057 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 
00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.058 10:20:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.430 10:20:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:35.431 10:20:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.431 00:10:35.431 real 0m1.420s 00:10:35.431 user 0m1.246s 00:10:35.431 sys 0m0.183s 00:10:35.431 10:20:48 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.431 10:20:48 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:35.431 ************************************ 00:10:35.431 END TEST accel_decomp_full 00:10:35.431 ************************************ 00:10:35.431 10:20:48 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:35.431 10:20:48 accel -- 
common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:35.431 10:20:48 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.431 10:20:48 accel -- common/autotest_common.sh@10 -- # set +x 00:10:35.431 ************************************ 00:10:35.431 START TEST accel_decomp_mcore 00:10:35.431 ************************************ 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:35.431 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:35.431 [2024-07-26 10:20:48.169923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:35.431 [2024-07-26 10:20:48.169979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314092 ] 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:35.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.431 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:35.431 [2024-07-26 10:20:48.301073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.691 [2024-07-26 10:20:48.348851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.691 [2024-07-26 10:20:48.348944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.691 [2024-07-26 10:20:48.349004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.691 [2024-07-26 10:20:48.349008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 
00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:35.691 10:20:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.070 00:10:37.070 real 0m1.415s 00:10:37.070 user 0m4.603s 00:10:37.070 sys 0m0.194s 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.070 10:20:49 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:37.070 ************************************ 00:10:37.070 END TEST accel_decomp_mcore 00:10:37.070 ************************************ 00:10:37.070 10:20:49 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:37.070 10:20:49 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:37.071 10:20:49 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.071 10:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:10:37.071 ************************************ 00:10:37.071 START TEST accel_decomp_full_mcore 00:10:37.071 ************************************ 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:10:37.071 [2024-07-26 10:20:49.658219] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:37.071 [2024-07-26 10:20:49.658273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314382 ] 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:37.071 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:37.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.071 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:37.071 [2024-07-26 10:20:49.789122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.071 [2024-07-26 10:20:49.836855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.071 [2024-07-26 10:20:49.836951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.071 [2024-07-26 10:20:49.837035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.071 [2024-07-26 10:20:49.837039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.071 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:37.072 10:20:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.452 00:10:38.452 real 0m1.418s 00:10:38.452 user 0m4.638s 00:10:38.452 sys 0m0.192s 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.452 10:20:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:38.452 ************************************ 00:10:38.452 END TEST accel_decomp_full_mcore 00:10:38.452 ************************************ 00:10:38.452 10:20:51 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:38.452 10:20:51 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:38.452 10:20:51 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.452 10:20:51 accel -- common/autotest_common.sh@10 -- # set +x 00:10:38.452 ************************************ 00:10:38.452 START TEST accel_decomp_mthread 00:10:38.452 ************************************ 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 
00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:38.452 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.453 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.453 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:38.453 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:38.453 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:38.453 [2024-07-26 10:20:51.164908] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:38.453 [2024-07-26 10:20:51.164966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314630 ] 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 
0000:3f:01.1 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:38.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.453 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:38.453 [2024-07-26 10:20:51.296276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.453 [2024-07-26 10:20:51.339573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:38.713 10:20:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 
00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.670 00:10:39.670 real 0m1.416s 00:10:39.670 user 0m1.240s 00:10:39.670 sys 0m0.186s 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.670 10:20:52 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:39.670 ************************************ 00:10:39.670 END TEST accel_decomp_mthread 00:10:39.670 ************************************ 00:10:39.941 10:20:52 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.941 10:20:52 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:39.941 10:20:52 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.941 10:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.941 ************************************ 00:10:39.941 START TEST accel_decomp_full_mthread 00:10:39.941 ************************************ 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:39.941 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
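The xtrace above shows accel.sh launching build/examples/accel_perf for the full_mthread decompress case, with its accel config piped in on /dev/fd/62. A rough sketch of repeating that run by hand, using only the flags that appear verbatim in the trace, is given below; the path is this job's workspace path, and the -c config descriptor is dropped on the assumption that the software-path runs carry an empty accel config (all of the trace's `[[ 0 -gt 0 ]]` / `[[ -n '' ]]` checks fall through).

# Sketch only: rerun the decompress case with the exact accel_perf flags
# visible in the trace above (-t 1 -w decompress -l <bib file> -y -o 0 -T 2).
SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" \
    -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y -o 0 -T 2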
00:10:39.941 [2024-07-26 10:20:52.666175] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:39.941 [2024-07-26 10:20:52.666232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314896 ] 00:10:39.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.941 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:39.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.941 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:39.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:39.942 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:39.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.942 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:39.942 [2024-07-26 10:20:52.797688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.942 [2024-07-26 10:20:52.841129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 
00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:40.202 10:20:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 
10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.580 00:10:41.580 real 0m1.448s 00:10:41.580 user 0m1.271s 00:10:41.580 sys 0m0.184s 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.580 10:20:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 ************************************ 00:10:41.580 END TEST accel_decomp_full_mthread 00:10:41.580 ************************************ 00:10:41.580 10:20:54 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:10:41.580 10:20:54 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:10:41.580 10:20:54 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:10:41.580 10:20:54 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:41.580 10:20:54 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3315148 00:10:41.580 10:20:54 accel -- accel/accel.sh@63 -- # waitforlisten 3315148 00:10:41.580 10:20:54 accel -- common/autotest_common.sh@831 -- # '[' -z 3315148 ']' 00:10:41.580 10:20:54 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.580 10:20:54 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.580 10:20:54 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:41.580 10:20:54 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.580 10:20:54 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:41.580 10:20:54 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.580 10:20:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.580 10:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 10:20:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:41.580 10:20:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.580 10:20:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.580 10:20:54 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:41.580 10:20:54 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:41.580 10:20:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:41.581 10:20:54 accel -- accel/accel.sh@41 -- # jq -r . 00:10:41.581 [2024-07-26 10:20:54.192786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
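Once COMPRESSDEV=1 is set, get_expected_opcs starts spdk_tgt with an accel config on /dev/fd/63 that enables the DPDK compressdev module. A sketch of that JSON follows: only the compressdev_scan_accel_module entry is verbatim from the accel_json_cfg line in the trace; the surrounding subsystems wrapper is an assumption, inferred from the jq filter (.subsystems[] | select(.subsystem=="accel").config[]) that check_save_config applies further down, and "pmd": 0 resolves here to the QAT PMD per the later "_set_pmd ... PMD being used: compress_qat" notices.

# Sketch only: the accel config that build_accel_config appears to assemble
# when COMPRESSDEV=1 and hand to spdk_tgt via -c /dev/fd/63.
# The output path is a placeholder.
cat <<'JSON' > /tmp/accel_compressdev.json
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        { "method": "compressdev_scan_accel_module", "params": { "pmd": 0 } }
      ]
    }
  ]
}
JSON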
00:10:41.581 [2024-07-26 10:20:54.192851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315148 ] 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:41.581 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:41.581 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:41.581 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:41.581 [2024-07-26 10:20:54.324924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.581 [2024-07-26 10:20:54.369965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.149 [2024-07-26 10:20:54.963025] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@864 -- # return 0 00:10:42.408 10:20:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:42.408 10:20:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:42.408 10:20:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:42.408 10:20:55 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:10:42.408 10:20:55 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:10:42.408 10:20:55 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:10:42.408 10:20:55 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:10:42.408 10:20:55 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.408 "method": "compressdev_scan_accel_module", 00:10:42.408 10:20:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:42.408 10:20:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.408 10:20:55 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:10:42.408 10:20:55 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.667 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.667 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.667 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.668 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.668 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.668 10:20:55 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:10:42.668 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.668 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.668 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.668 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.668 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.668 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.668 10:20:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # IFS== 00:10:42.668 10:20:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:42.668 10:20:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:42.668 10:20:55 accel -- accel/accel.sh@75 -- # killprocess 3315148 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@950 -- # '[' -z 3315148 ']' 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@954 -- # kill -0 3315148 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@955 -- # uname 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3315148 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3315148' 00:10:42.668 killing process with pid 3315148 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@969 -- # kill 3315148 00:10:42.668 10:20:55 accel -- common/autotest_common.sh@974 -- # wait 3315148 00:10:42.927 10:20:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:42.927 10:20:55 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:42.927 10:20:55 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:42.927 10:20:55 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.927 10:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:10:42.927 ************************************ 00:10:42.927 START TEST accel_cdev_comp 00:10:42.927 ************************************ 00:10:42.927 10:20:55 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:42.927 10:20:55 
accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:42.927 10:20:55 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:10:42.927 [2024-07-26 10:20:55.793708] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:42.927 [2024-07-26 10:20:55.793765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315469 ] 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 
0000:3d:02.6 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.186 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.187 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:43.187 [2024-07-26 10:20:55.924822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.187 [2024-07-26 10:20:55.968496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.756 [2024-07-26 10:20:56.560468] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:43.756 [2024-07-26 10:20:56.562896] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x9f18a0 PMD being used: compress_qat 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 [2024-07-26 10:20:56.566601] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x9f3740 PMD being used: compress_qat 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 
accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 
-- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:43.756 10:20:56 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.135 10:20:57 accel.accel_cdev_comp -- 
accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:45.135 10:20:57 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:45.135 00:10:45.135 real 0m1.948s 00:10:45.135 user 0m1.438s 00:10:45.135 sys 0m0.515s 00:10:45.135 10:20:57 accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.135 10:20:57 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:10:45.135 ************************************ 00:10:45.135 END TEST accel_cdev_comp 00:10:45.135 ************************************ 00:10:45.135 10:20:57 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:45.135 10:20:57 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:45.135 10:20:57 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.135 10:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:10:45.135 ************************************ 00:10:45.135 START TEST accel_cdev_decomp 00:10:45.135 ************************************ 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:45.135 10:20:57 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 
00:10:45.135 [2024-07-26 10:20:57.825804] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:45.135 [2024-07-26 10:20:57.825865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3315802 ] 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:45.135 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:45.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.135 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:45.135 [2024-07-26 10:20:57.961206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.135 [2024-07-26 10:20:58.004493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.704 [2024-07-26 10:20:58.596699] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:45.704 [2024-07-26 10:20:58.599108] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x26098a0 PMD being used: compress_qat 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.704 [2024-07-26 10:20:58.602984] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x260b740 PMD being used: compress_qat 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.704 10:20:58 accel.accel_cdev_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case 
"$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:45.963 10:20:58 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.899 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:46.899 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:46.899 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:46.899 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:46.900 00:10:46.900 real 0m1.949s 00:10:46.900 user 0m1.425s 00:10:46.900 sys 
0m0.530s 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.900 10:20:59 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:46.900 ************************************ 00:10:46.900 END TEST accel_cdev_decomp 00:10:46.900 ************************************ 00:10:46.900 10:20:59 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:46.900 10:20:59 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:46.900 10:20:59 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.900 10:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:10:47.159 ************************************ 00:10:47.159 START TEST accel_cdev_decomp_full 00:10:47.159 ************************************ 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:47.159 10:20:59 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:47.159 [2024-07-26 10:20:59.857158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
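The accel_cdev_decomp_full run that begins here uses the same decompress workload with -o 0 added; in the trace further down the per-operation size reported by the script changes from the '4096 bytes' of the previous run to '111250 bytes', so the bib input is apparently driven as one large buffer rather than in 4 KiB pieces. Assuming the same standalone sketch as above, only the trailing flag differs:

  # Sketch: identical to the previous command apart from -o 0 (flag taken from the run_test line above).
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
    -c <(echo '{"subsystems":[{"subsystem":"accel","config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}') \
    -t 1 -w decompress \
    -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0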
00:10:47.159 [2024-07-26 10:20:59.857214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316092 ] 00:10:47.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.159 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:47.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.159 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:47.160 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.160 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:47.160 [2024-07-26 10:20:59.987176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.160 [2024-07-26 10:21:00.031525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.727 [2024-07-26 10:21:00.618850] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:47.727 [2024-07-26 10:21:00.621209] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xdea8a0 PMD being used: compress_qat 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.727 [2024-07-26 10:21:00.624085] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xdea940 PMD being used: compress_qat 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:47.987 10:21:00 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:48.922 10:21:01 
accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:48.922 00:10:48.922 real 0m1.943s 00:10:48.922 user 0m1.419s 00:10:48.922 sys 0m0.530s 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.922 10:21:01 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:48.922 ************************************ 00:10:48.922 END TEST accel_cdev_decomp_full 00:10:48.922 ************************************ 00:10:48.922 10:21:01 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:48.922 10:21:01 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:48.922 10:21:01 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.922 10:21:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.182 ************************************ 00:10:49.182 START TEST accel_cdev_decomp_mcore 00:10:49.182 ************************************ 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:49.182 10:21:01 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:49.182 [2024-07-26 10:21:01.887970] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
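accel_cdev_decomp_mcore, starting above, is the multi-core variant: the run_test line adds -m 0xf, and the output below reflects it (Total cores available: 4, a reactor started on each of cores 0-3, and a series of compress_qat channel notices as the reactors come up). The result summary further down reports roughly 6.5 s of user time for a 1 s test, consistent with four cores running in parallel. The corresponding standalone sketch only swaps the core mask in:

  # Sketch: -m 0xf taken from the run_test line above; everything else as in the first sketch.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
    -c <(echo '{"subsystems":[{"subsystem":"accel","config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}') \
    -t 1 -w decompress \
    -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf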
00:10:49.182 [2024-07-26 10:21:01.888029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316621 ] 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:49.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:49.182 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:49.182 [2024-07-26 10:21:02.020760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.182 [2024-07-26 10:21:02.068532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.182 [2024-07-26 10:21:02.068626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.182 [2024-07-26 10:21:02.068687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.182 [2024-07-26 10:21:02.068690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.119 [2024-07-26 10:21:02.657444] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:50.119 [2024-07-26 10:21:02.659790] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x125ef70 PMD being used: compress_qat 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 [2024-07-26 10:21:02.664815] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fac9019b8b0 PMD being used: compress_qat 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:50.119 [2024-07-26 10:21:02.665758] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fac8819b8b0 PMD being used: compress_qat 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 [2024-07-26 10:21:02.666435] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1262bf0 PMD being used: compress_qat 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 [2024-07-26 10:21:02.666591] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fac8019b8b0 PMD being used: compress_qat 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:50.119 10:21:02 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.054 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:51.055 00:10:51.055 real 0m1.970s 00:10:51.055 user 0m6.459s 00:10:51.055 sys 0m0.531s 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.055 10:21:03 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 ************************************ 00:10:51.055 END TEST accel_cdev_decomp_mcore 00:10:51.055 ************************************ 00:10:51.055 10:21:03 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:51.055 10:21:03 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:51.055 10:21:03 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.055 10:21:03 accel -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 ************************************ 00:10:51.055 START TEST accel_cdev_decomp_full_mcore 00:10:51.055 ************************************ 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 
00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:51.055 10:21:03 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:51.055 [2024-07-26 10:21:03.940520] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
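accel_cdev_decomp_full_mcore, whose setup trace ends here, simply combines the two previous variations, -o 0 (the full-buffer size seen earlier) and -m 0xf (four cores), as shown in its run_test line. A standalone sketch would therefore carry both flags:

  # Sketch: combines -o 0 and -m 0xf exactly as in the run_test line above.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
    -c <(echo '{"subsystems":[{"subsystem":"accel","config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}') \
    -t 1 -w decompress \
    -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf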
00:10:51.055 [2024-07-26 10:21:03.940574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317075 ] 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.6 cannot be used 
00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.314 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:51.314 [2024-07-26 10:21:04.069402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.314 [2024-07-26 10:21:04.116780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.314 [2024-07-26 10:21:04.116875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.314 [2024-07-26 10:21:04.116959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.314 [2024-07-26 10:21:04.116962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.882 [2024-07-26 10:21:04.707646] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:51.882 [2024-07-26 10:21:04.710012] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23b5f70 PMD being used: compress_qat 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.882 [2024-07-26 10:21:04.714092] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fdcfc19b8b0 PMD being used: compress_qat 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:51.882 [2024-07-26 10:21:04.714986] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fdcf419b8b0 PMD being used: 
compress_qat 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.882 [2024-07-26 10:21:04.715693] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23b85a0 PMD being used: compress_qat 00:10:51.882 [2024-07-26 10:21:04.715900] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fdcec19b8b0 PMD being used: compress_qat 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.882 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:51.883 10:21:04 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:53.261 00:10:53.261 real 0m1.966s 00:10:53.261 user 0m6.445s 00:10:53.261 sys 0m0.545s 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.261 10:21:05 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:53.261 ************************************ 00:10:53.261 END TEST accel_cdev_decomp_full_mcore 00:10:53.261 ************************************ 00:10:53.261 10:21:05 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:53.261 10:21:05 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:53.261 10:21:05 accel -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.261 10:21:05 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.261 ************************************ 00:10:53.261 START TEST accel_cdev_decomp_mthread 00:10:53.261 ************************************ 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:53.261 10:21:05 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:53.261 [2024-07-26 10:21:05.988727] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:10:53.261 [2024-07-26 10:21:05.988787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317742 ] 00:10:53.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.261 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:53.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.261 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:53.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.261 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:53.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.261 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:53.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.261 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.6 cannot be used 
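Outside the test harness, the mthread case launched above could be reproduced with an invocation along these lines. This is a hand-written sketch: bash process substitution stands in for the /dev/fd/62 descriptor that accel.sh sets up, and the inline JSON is the reconstructed config shown earlier, not a copy from the log:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
        -c <(echo '{"subsystems":[{"subsystem":"accel","config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}') \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2

Compared with the preceding full_mcore run (core mask 0xf, four reactors), this variant runs with a one-core mask (-c 0x1 in the EAL parameters above) and instead passes -T 2, the multithread knob the *_mthread cases exercise, which is why only "Reactor started on core 0" appears below.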
00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:53.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:53.262 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:53.262 [2024-07-26 10:21:06.120133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.521 [2024-07-26 10:21:06.163673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.093 [2024-07-26 10:21:06.760340] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:54.093 [2024-07-26 10:21:06.762770] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xad18a0 PMD being used: compress_qat 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 [2024-07-26 10:21:06.767217] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xad3b40 PMD being used: compress_qat 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 [2024-07-26 10:21:06.769838] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xbf78b0 PMD being used: 
compress_qat 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@20 -- # val=2 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:54.093 10:21:06 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:55.061 00:10:55.061 real 0m1.956s 00:10:55.061 user 0m1.430s 00:10:55.061 sys 0m0.531s 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.061 10:21:07 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:55.061 ************************************ 00:10:55.061 END TEST accel_cdev_decomp_mthread 00:10:55.061 ************************************ 00:10:55.061 10:21:07 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:55.061 10:21:07 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:55.061 10:21:07 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.061 10:21:07 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.321 ************************************ 00:10:55.321 START TEST accel_cdev_decomp_full_mthread 00:10:55.321 ************************************ 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.321 10:21:07 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.321 10:21:07 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.321 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:55.321 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:55.321 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:55.321 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:55.321 [2024-07-26 10:21:08.024684] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:55.321 [2024-07-26 10:21:08.024744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318079 ] 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.321 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:55.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested 
device 0000:3f:01.1 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:55.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.322 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:55.322 [2024-07-26 10:21:08.155993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.322 [2024-07-26 10:21:08.199849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.259 [2024-07-26 10:21:08.803549] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:56.259 [2024-07-26 10:21:08.805906] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1afa8a0 PMD being used: compress_qat 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 [2024-07-26 10:21:08.809462] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1afa650 PMD being used: compress_qat 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 [2024-07-26 10:21:08.811883] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1affa70 PMD being used: compress_qat 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.259 10:21:08 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:57.197 00:10:57.197 real 0m1.960s 00:10:57.197 user 0m1.428s 00:10:57.197 sys 0m0.533s 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.197 10:21:09 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 ************************************ 00:10:57.197 END TEST accel_cdev_decomp_full_mthread 00:10:57.197 ************************************ 00:10:57.197 10:21:09 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:10:57.197 10:21:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:57.197 10:21:10 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:57.197 10:21:10 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.197 10:21:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:57.197 10:21:10 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.197 10:21:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:57.197 10:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:10:57.197 
10:21:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.197 10:21:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.197 10:21:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:57.197 10:21:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:57.197 10:21:10 accel -- accel/accel.sh@41 -- # jq -r . 00:10:57.197 ************************************ 00:10:57.197 START TEST accel_dif_functional_tests 00:10:57.197 ************************************ 00:10:57.197 10:21:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:57.456 [2024-07-26 10:21:10.100721] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:57.456 [2024-07-26 10:21:10.100778] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318590 ] 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: 
Requested device 0000:3f:01.1 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:57.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:57.456 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:57.456 [2024-07-26 10:21:10.236293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.456 [2024-07-26 10:21:10.282718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.456 [2024-07-26 10:21:10.282828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.456 [2024-07-26 10:21:10.282837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.715 00:10:57.715 00:10:57.715 CUnit - A unit testing framework for C - Version 2.1-3 00:10:57.715 http://cunit.sourceforge.net/ 00:10:57.715 00:10:57.715 00:10:57.715 Suite: accel_dif 00:10:57.715 Test: verify: DIF generated, GUARD check ...passed 00:10:57.715 Test: verify: DIF generated, APPTAG check ...passed 00:10:57.715 Test: verify: DIF generated, REFTAG check ...passed 00:10:57.715 Test: verify: DIF not generated, GUARD check ...[2024-07-26 10:21:10.360257] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:57.715 passed 00:10:57.715 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 10:21:10.360318] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:57.715 passed 00:10:57.715 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 10:21:10.360349] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:57.715 passed 00:10:57.715 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:57.715 Test: verify: APPTAG 
incorrect, APPTAG check ...[2024-07-26 10:21:10.360414] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:57.715 passed 00:10:57.715 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:57.715 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:57.715 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:57.715 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 10:21:10.360555] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:57.715 passed 00:10:57.715 Test: verify copy: DIF generated, GUARD check ...passed 00:10:57.715 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:57.715 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:57.716 Test: verify copy: DIF not generated, GUARD check ...[2024-07-26 10:21:10.360703] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:57.716 passed 00:10:57.716 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-26 10:21:10.360735] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:57.716 passed 00:10:57.716 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-26 10:21:10.360773] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:57.716 passed 00:10:57.716 Test: generate copy: DIF generated, GUARD check ...passed 00:10:57.716 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:57.716 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:57.716 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:57.716 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:57.716 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:57.716 Test: generate copy: iovecs-len validate ...[2024-07-26 10:21:10.361004] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:57.716 passed 00:10:57.716 Test: generate copy: buffer alignment validate ...passed 00:10:57.716 00:10:57.716 Run Summary: Type Total Ran Passed Failed Inactive 00:10:57.716 suites 1 1 n/a 0 0 00:10:57.716 tests 26 26 26 0 0 00:10:57.716 asserts 115 115 115 0 n/a 00:10:57.716 00:10:57.716 Elapsed time = 0.002 seconds 00:10:57.716 00:10:57.716 real 0m0.490s 00:10:57.716 user 0m0.605s 00:10:57.716 sys 0m0.211s 00:10:57.716 10:21:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.716 10:21:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:57.716 ************************************ 00:10:57.716 END TEST accel_dif_functional_tests 00:10:57.716 ************************************ 00:10:57.716 00:10:57.716 real 0m49.133s 00:10:57.716 user 0m56.498s 00:10:57.716 sys 0m11.453s 00:10:57.716 10:21:10 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.716 10:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:10:57.716 ************************************ 00:10:57.716 END TEST accel 00:10:57.716 ************************************ 00:10:57.975 10:21:10 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:57.975 10:21:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:57.975 10:21:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.975 10:21:10 -- common/autotest_common.sh@10 -- # set +x 00:10:57.975 ************************************ 00:10:57.975 START TEST accel_rpc 00:10:57.975 ************************************ 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:57.975 * Looking for test storage... 00:10:57.975 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:10:57.975 10:21:10 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:57.975 10:21:10 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3318652 00:10:57.975 10:21:10 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3318652 00:10:57.975 10:21:10 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 3318652 ']' 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.975 10:21:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.975 [2024-07-26 10:21:10.844710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
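Note: the accel_rpc suite starting here exercises the opcode-assignment flow against a target launched with --wait-for-rpc. A minimal stand-alone sketch of the same sequence (assuming the rpc.py path used throughout this workspace and a target already listening on /var/tmp/spdk.sock):
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m software      # pin the "copy" opcode to the software module (the test does this before framework init)
  $rpc framework_start_init                      # finish subsystem initialization so the assignment takes effect
  $rpc accel_get_opc_assignments | jq -r .copy   # read the assignment back; expected output: software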
00:10:57.975 [2024-07-26 10:21:10.844773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318652 ] 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:58.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.235 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:58.236 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:58.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.236 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:58.236 [2024-07-26 10:21:10.978763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.236 [2024-07-26 10:21:11.024015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.172 10:21:11 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.172 10:21:11 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:59.172 10:21:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:59.172 10:21:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:59.172 10:21:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:59.172 10:21:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:59.172 10:21:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:59.172 10:21:11 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.172 10:21:11 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.172 10:21:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 ************************************ 00:10:59.172 START TEST accel_assign_opcode 00:10:59.172 ************************************ 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 [2024-07-26 10:21:11.778342] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 [2024-07-26 10:21:11.786356] accel_rpc.c: 167:rpc_accel_assign_opc: 
*NOTICE*: Operation copy will be assigned to module software 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 10:21:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:59.172 10:21:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.172 software 00:10:59.172 00:10:59.172 real 0m0.268s 00:10:59.172 user 0m0.047s 00:10:59.172 sys 0m0.016s 00:10:59.172 10:21:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.172 10:21:12 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 ************************************ 00:10:59.172 END TEST accel_assign_opcode 00:10:59.172 ************************************ 00:10:59.431 10:21:12 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3318652 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 3318652 ']' 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 3318652 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3318652 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3318652' 00:10:59.431 killing process with pid 3318652 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@969 -- # kill 3318652 00:10:59.431 10:21:12 accel_rpc -- common/autotest_common.sh@974 -- # wait 3318652 00:10:59.690 00:10:59.690 real 0m1.790s 00:10:59.690 user 0m1.806s 00:10:59.690 sys 0m0.616s 00:10:59.690 10:21:12 accel_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.690 10:21:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.690 ************************************ 00:10:59.690 END TEST accel_rpc 00:10:59.690 ************************************ 00:10:59.690 10:21:12 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:59.690 10:21:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.690 10:21:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.690 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:10:59.690 ************************************ 00:10:59.690 START TEST app_cmdline 00:10:59.690 
************************************ 00:10:59.690 10:21:12 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:59.949 * Looking for test storage... 00:10:59.949 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:59.949 10:21:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:59.949 10:21:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3319010 00:10:59.949 10:21:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3319010 00:10:59.949 10:21:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:59.949 10:21:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3319010 ']' 00:10:59.949 10:21:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.949 10:21:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.950 10:21:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.950 10:21:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.950 10:21:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:59.950 [2024-07-26 10:21:12.720250] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:10:59.950 [2024-07-26 10:21:12.720315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319010 ] 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 
0000:3d:02.3 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:59.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.950 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:00.209 [2024-07-26 10:21:12.855620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.209 [2024-07-26 10:21:12.900726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.777 10:21:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.777 10:21:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:11:00.777 10:21:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:01.036 { 00:11:01.036 "version": "SPDK v24.09-pre git sha1 704257090", 00:11:01.036 "fields": { 00:11:01.036 "major": 24, 00:11:01.036 "minor": 9, 00:11:01.036 "patch": 0, 00:11:01.036 
"suffix": "-pre", 00:11:01.036 "commit": "704257090" 00:11:01.036 } 00:11:01.036 } 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:01.036 10:21:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:01.036 10:21:13 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:01.295 request: 00:11:01.295 { 00:11:01.295 "method": "env_dpdk_get_mem_stats", 00:11:01.295 "req_id": 1 00:11:01.295 } 00:11:01.295 Got JSON-RPC error response 00:11:01.295 response: 00:11:01.295 { 00:11:01.295 "code": -32601, 00:11:01.295 "message": "Method not found" 00:11:01.295 } 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.295 10:21:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3319010 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 
3319010 ']' 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3319010 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319010 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319010' 00:11:01.295 killing process with pid 3319010 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@969 -- # kill 3319010 00:11:01.295 10:21:14 app_cmdline -- common/autotest_common.sh@974 -- # wait 3319010 00:11:01.863 00:11:01.863 real 0m1.933s 00:11:01.863 user 0m2.296s 00:11:01.863 sys 0m0.614s 00:11:01.863 10:21:14 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.863 10:21:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:01.863 ************************************ 00:11:01.863 END TEST app_cmdline 00:11:01.863 ************************************ 00:11:01.863 10:21:14 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:11:01.863 10:21:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:01.863 10:21:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.863 10:21:14 -- common/autotest_common.sh@10 -- # set +x 00:11:01.863 ************************************ 00:11:01.863 START TEST version 00:11:01.863 ************************************ 00:11:01.863 10:21:14 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:11:01.863 * Looking for test storage... 
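Note: the app_cmdline run above demonstrates the --rpcs-allowed whitelist: only spdk_get_version and rpc_get_methods are served, and env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"). A minimal sketch of the same behaviour, assuming the workspace paths shown above:
  spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
  $spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # wait for /var/tmp/spdk.sock to appear, then:
  $spdk/scripts/rpc.py spdk_get_version          # allowed: returns the version JSON printed above
  $spdk/scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two whitelisted methods
  $spdk/scripts/rpc.py env_dpdk_get_mem_stats    # rejected: "Method not found" (-32601)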
00:11:01.863 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:11:01.863 10:21:14 version -- app/version.sh@17 -- # get_header_version major 00:11:01.863 10:21:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # cut -f2 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.863 10:21:14 version -- app/version.sh@17 -- # major=24 00:11:01.863 10:21:14 version -- app/version.sh@18 -- # get_header_version minor 00:11:01.863 10:21:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # cut -f2 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.863 10:21:14 version -- app/version.sh@18 -- # minor=9 00:11:01.863 10:21:14 version -- app/version.sh@19 -- # get_header_version patch 00:11:01.863 10:21:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # cut -f2 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.863 10:21:14 version -- app/version.sh@19 -- # patch=0 00:11:01.863 10:21:14 version -- app/version.sh@20 -- # get_header_version suffix 00:11:01.863 10:21:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # cut -f2 00:11:01.863 10:21:14 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.863 10:21:14 version -- app/version.sh@20 -- # suffix=-pre 00:11:01.863 10:21:14 version -- app/version.sh@22 -- # version=24.9 00:11:01.863 10:21:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:01.863 10:21:14 version -- app/version.sh@28 -- # version=24.9rc0 00:11:01.863 10:21:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:11:01.863 10:21:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:01.863 10:21:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:01.863 10:21:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:01.863 00:11:01.863 real 0m0.195s 00:11:01.863 user 0m0.104s 00:11:01.863 sys 0m0.137s 00:11:01.863 10:21:14 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.863 10:21:14 version -- common/autotest_common.sh@10 -- # set +x 00:11:01.863 ************************************ 00:11:01.863 END TEST version 00:11:01.863 ************************************ 00:11:02.122 10:21:14 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:11:02.122 10:21:14 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:11:02.122 10:21:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:02.122 10:21:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.122 10:21:14 -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.122 ************************************ 00:11:02.122 START TEST blockdev_general 00:11:02.122 ************************************ 00:11:02.122 10:21:14 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:11:02.122 * Looking for test storage... 00:11:02.122 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:02.122 10:21:14 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:02.122 10:21:14 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:02.122 10:21:14 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:02.122 10:21:14 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:02.122 10:21:14 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3319633 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:02.123 10:21:14 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 3319633 00:11:02.123 10:21:14 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 3319633 ']' 00:11:02.123 10:21:14 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.123 10:21:14 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.123 10:21:14 blockdev_general -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.123 10:21:14 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.123 10:21:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.123 [2024-07-26 10:21:15.011850] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:11:02.123 [2024-07-26 10:21:15.011910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319633 ] 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.383 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:02.383 [2024-07-26 10:21:15.148596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.383 [2024-07-26 10:21:15.192276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.319 10:21:15 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.319 10:21:15 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:11:03.319 10:21:15 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:03.319 10:21:15 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:11:03.319 10:21:15 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:03.319 10:21:15 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.319 10:21:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.319 [2024-07-26 10:21:16.103317] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:03.319 [2024-07-26 10:21:16.103369] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:03.319 00:11:03.319 [2024-07-26 10:21:16.111299] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:03.319 [2024-07-26 10:21:16.111324] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:03.319 00:11:03.319 Malloc0 00:11:03.319 Malloc1 00:11:03.319 Malloc2 00:11:03.319 Malloc3 00:11:03.319 Malloc4 00:11:03.319 Malloc5 00:11:03.319 Malloc6 00:11:03.319 Malloc7 00:11:03.578 Malloc8 00:11:03.578 Malloc9 00:11:03.578 [2024-07-26 10:21:16.244604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:03.578 [2024-07-26 10:21:16.244649] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:03.578 [2024-07-26 10:21:16.244667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf99000 00:11:03.578 [2024-07-26 10:21:16.244678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.578 [2024-07-26 10:21:16.245895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.578 [2024-07-26 10:21:16.245923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:03.578 TestPT 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:03.578 5000+0 records in 00:11:03.578 5000+0 records out 00:11:03.578 10240000 bytes (10 MB, 9.8 MiB) copied, 0.036376 s, 282 MB/s 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 AIO0 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.578 10:21:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 10:21:16 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:03.838 10:21:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.838 10:21:16 blockdev_general -- bdev/blockdev.sh@748 -- # 
mapfile -t bdevs_name 00:11:03.838 10:21:16 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:03.839 10:21:16 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f1408f8b-e0d3-4342-8ebe-1c7883c05890"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f1408f8b-e0d3-4342-8ebe-1c7883c05890",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "092fde67-b181-50c0-a3ef-ee7712e0a190"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "092fde67-b181-50c0-a3ef-ee7712e0a190",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f770f83c-599b-5499-8d09-8b0d90b6ece4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f770f83c-599b-5499-8d09-8b0d90b6ece4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b368092e-6227-5904-acc3-ed0ac227b81c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b368092e-6227-5904-acc3-ed0ac227b81c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "9152497e-d853-55cf-822c-1fce7d526151"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9152497e-d853-55cf-822c-1fce7d526151",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "0462d91b-1306-58f5-b8f2-0504336219c3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0462d91b-1306-58f5-b8f2-0504336219c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "72b52fa1-a3b1-5a1a-a2a6-59fa5fe7954b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "72b52fa1-a3b1-5a1a-a2a6-59fa5fe7954b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' 
}' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "186032fb-bebb-5b38-8013-15fdd909ffb5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "186032fb-bebb-5b38-8013-15fdd909ffb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8daea17b-dd53-5fcb-b896-20c6b1ec1ceb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8daea17b-dd53-5fcb-b896-20c6b1ec1ceb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8e99edfa-2f25-561f-b861-6fe3cb007622"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8e99edfa-2f25-561f-b861-6fe3cb007622",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c303dae3-1fd2-58e8-9033-ecfd69d6e23a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c303dae3-1fd2-58e8-9033-ecfd69d6e23a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "669c3fe8-f6cc-55c0-bec7-6f2558796193"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "669c3fe8-f6cc-55c0-bec7-6f2558796193",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e1d078ae-2756-46f4-9346-b442832da487"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e1d078ae-2756-46f4-9346-b442832da487",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e1d078ae-2756-46f4-9346-b442832da487",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2c025866-7973-4d11-9f4d-94fdd79e8cb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "166c2a4d-7298-4d99-8c89-a4ad4553ce99",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "bea468bc-9407-4fb2-aa26-9b4679d4a53f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bea468bc-9407-4fb2-aa26-9b4679d4a53f",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bea468bc-9407-4fb2-aa26-9b4679d4a53f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "130c28f9-9fd8-4738-afff-7cd41a6dd635",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "960fd66e-5cd9-4291-b11f-97ef087f60d2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "96623534-3630-4468-98e2-5e9248c8a394"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "96623534-3630-4468-98e2-5e9248c8a394",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "96623534-3630-4468-98e2-5e9248c8a394",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "29c8dd47-c559-4452-89d1-e2aa7974b9f0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "928139da-84d0-4766-a864-cc4ca14dc303",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c1c421d4-ed59-45a3-b2b4-a65b554bacb6"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' 
' "num_blocks": 5000,' ' "uuid": "c1c421d4-ed59-45a3-b2b4-a65b554bacb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:03.839 10:21:16 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:03.839 10:21:16 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:11:03.839 10:21:16 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:03.839 10:21:16 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 3319633 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 3319633 ']' 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 3319633 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319633 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319633' 00:11:03.839 killing process with pid 3319633 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@969 -- # kill 3319633 00:11:03.839 10:21:16 blockdev_general -- common/autotest_common.sh@974 -- # wait 3319633 00:11:04.408 10:21:17 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:04.408 10:21:17 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:04.408 10:21:17 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:04.408 10:21:17 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.408 10:21:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 ************************************ 00:11:04.408 START TEST bdev_hello_world 00:11:04.408 ************************************ 00:11:04.408 10:21:17 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:04.408 [2024-07-26 10:21:17.235826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:11:04.408 [2024-07-26 10:21:17.235890] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319930 ] 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:04.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:01.6 cannot be used 
00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:04.667 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.667 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:04.667 [2024-07-26 10:21:17.367812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.667 [2024-07-26 10:21:17.411965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.667 [2024-07-26 10:21:17.560904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:04.667 [2024-07-26 10:21:17.560951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:04.667 [2024-07-26 10:21:17.560966] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:04.667 [2024-07-26 10:21:17.568912] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:04.667 [2024-07-26 10:21:17.568937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:04.926 [2024-07-26 10:21:17.576920] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:04.926 [2024-07-26 10:21:17.576943] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:04.926 [2024-07-26 10:21:17.647825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:04.926 [2024-07-26 10:21:17.647873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.926 [2024-07-26 10:21:17.647888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb2880 00:11:04.926 [2024-07-26 10:21:17.647899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.926 [2024-07-26 10:21:17.649191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.926 [2024-07-26 10:21:17.649221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:04.926 [2024-07-26 10:21:17.790603] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:04.926 [2024-07-26 10:21:17.790674] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:04.926 [2024-07-26 10:21:17.790728] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:04.926 [2024-07-26 10:21:17.790804] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:04.926 
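The vbdev_passthru notices above ("Match on Malloc3", "created pt_bdev for: TestPT") show the passthru vbdev being stacked on Malloc3 once its base bdev arrives. A hedged sketch of the equivalent RPCs; the flag names are taken from scripts/rpc.py rather than from this run, so verify them with --help:

    # create the passthru bdev TestPT on top of Malloc3
    ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
    # and remove it again when finished
    ./scripts/rpc.py bdev_passthru_delete TestPT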
[2024-07-26 10:21:17.790880] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:04.926 [2024-07-26 10:21:17.790911] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:04.926 [2024-07-26 10:21:17.790974] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:04.926 00:11:04.926 [2024-07-26 10:21:17.791014] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:05.185 00:11:05.185 real 0m0.877s 00:11:05.185 user 0m0.529s 00:11:05.185 sys 0m0.302s 00:11:05.185 10:21:18 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.185 10:21:18 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:05.185 ************************************ 00:11:05.185 END TEST bdev_hello_world 00:11:05.185 ************************************ 00:11:05.444 10:21:18 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:05.444 10:21:18 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.444 10:21:18 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.444 10:21:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:05.444 ************************************ 00:11:05.444 START TEST bdev_bounds 00:11:05.444 ************************************ 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3320204 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3320204' 00:11:05.444 Process bdevio pid: 3320204 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3320204 00:11:05.444 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3320204 ']' 00:11:05.445 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.445 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.445 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.445 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.445 10:21:18 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:05.445 [2024-07-26 10:21:18.187864] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
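At this point hello_bdev has opened Malloc0, written a buffer, and read back "Hello World!", and the harness moves on to the bdev_bounds sub-test. A minimal, illustrative way to run the same example by hand, assuming an SPDK build tree and a shell in its root; this sketch defines only Malloc0, whereas the bdev.json used by the test also defines the split, passthru, raid and AIO bdevs listed earlier:

    cat > /tmp/hello_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } }
          ]
        }
      ]
    }
    EOF
    ./build/examples/hello_bdev --json /tmp/hello_bdev.json -b Malloc0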
00:11:05.445 [2024-07-26 10:21:18.187922] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320204 ] 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.6 cannot be used 
00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:05.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:05.445 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:05.445 [2024-07-26 10:21:18.322103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.704 [2024-07-26 10:21:18.367461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.704 [2024-07-26 10:21:18.367556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.704 [2024-07-26 10:21:18.367560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.704 [2024-07-26 10:21:18.513937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:05.704 [2024-07-26 10:21:18.513989] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:05.704 [2024-07-26 10:21:18.514002] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:05.704 [2024-07-26 10:21:18.521951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:05.704 [2024-07-26 10:21:18.521975] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:05.704 [2024-07-26 10:21:18.529969] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:05.704 [2024-07-26 10:21:18.529992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:05.704 [2024-07-26 10:21:18.601320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:05.704 [2024-07-26 10:21:18.601370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.704 [2024-07-26 10:21:18.601388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe40540 00:11:05.704 [2024-07-26 10:21:18.601400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.704 [2024-07-26 10:21:18.602683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.704 [2024-07-26 10:21:18.602712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:06.270 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.270 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:06.270 10:21:19 
blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:06.530 I/O targets: 00:11:06.530 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:06.530 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:06.530 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:06.530 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:06.530 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:06.530 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:06.530 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:06.530 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:06.530 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:06.530 00:11:06.530 00:11:06.530 CUnit - A unit testing framework for C - Version 2.1-3 00:11:06.530 http://cunit.sourceforge.net/ 00:11:06.530 00:11:06.530 00:11:06.530 Suite: bdevio tests on: AIO0 00:11:06.530 Test: blockdev write read block ...passed 00:11:06.530 Test: blockdev write zeroes read block ...passed 00:11:06.530 Test: blockdev write zeroes read no split ...passed 00:11:06.530 Test: blockdev write zeroes read split ...passed 00:11:06.530 Test: blockdev write zeroes read split partial ...passed 00:11:06.530 Test: blockdev reset ...passed 00:11:06.530 Test: blockdev write read 8 blocks ...passed 00:11:06.530 Test: blockdev write read size > 128k ...passed 00:11:06.530 Test: blockdev write read invalid size ...passed 00:11:06.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.530 Test: blockdev write read max offset ...passed 00:11:06.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.530 Test: blockdev writev readv 8 blocks ...passed 00:11:06.530 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.530 Test: blockdev writev readv block ...passed 00:11:06.530 Test: blockdev writev readv size > 128k ...passed 00:11:06.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.530 Test: blockdev comparev and writev ...passed 00:11:06.530 Test: blockdev nvme passthru rw ...passed 00:11:06.530 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.530 Test: blockdev nvme admin passthru ...passed 00:11:06.530 Test: blockdev copy ...passed 00:11:06.530 Suite: bdevio tests on: raid1 00:11:06.530 Test: blockdev write read block ...passed 00:11:06.530 Test: blockdev write zeroes read block ...passed 00:11:06.530 Test: blockdev write zeroes read no split ...passed 00:11:06.530 Test: blockdev write zeroes read split ...passed 00:11:06.530 Test: blockdev write zeroes read split partial ...passed 00:11:06.530 Test: blockdev reset ...passed 00:11:06.530 Test: blockdev write read 8 blocks ...passed 00:11:06.530 Test: blockdev write read size > 128k ...passed 00:11:06.530 Test: blockdev write read invalid size ...passed 00:11:06.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.530 Test: blockdev write read max offset ...passed 
00:11:06.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.530 Test: blockdev writev readv 8 blocks ...passed 00:11:06.530 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.530 Test: blockdev writev readv block ...passed 00:11:06.530 Test: blockdev writev readv size > 128k ...passed 00:11:06.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.530 Test: blockdev comparev and writev ...passed 00:11:06.530 Test: blockdev nvme passthru rw ...passed 00:11:06.530 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.530 Test: blockdev nvme admin passthru ...passed 00:11:06.530 Test: blockdev copy ...passed 00:11:06.530 Suite: bdevio tests on: concat0 00:11:06.530 Test: blockdev write read block ...passed 00:11:06.530 Test: blockdev write zeroes read block ...passed 00:11:06.530 Test: blockdev write zeroes read no split ...passed 00:11:06.530 Test: blockdev write zeroes read split ...passed 00:11:06.530 Test: blockdev write zeroes read split partial ...passed 00:11:06.530 Test: blockdev reset ...passed 00:11:06.530 Test: blockdev write read 8 blocks ...passed 00:11:06.530 Test: blockdev write read size > 128k ...passed 00:11:06.530 Test: blockdev write read invalid size ...passed 00:11:06.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.530 Test: blockdev write read max offset ...passed 00:11:06.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.530 Test: blockdev writev readv 8 blocks ...passed 00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: raid0 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme 
passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: TestPT 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: Malloc2p7 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: Malloc2p6 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 
Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: Malloc2p5 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: Malloc2p4 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 
00:11:06.531 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.531 Test: blockdev writev readv block ...passed 00:11:06.531 Test: blockdev writev readv size > 128k ...passed 00:11:06.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.531 Test: blockdev comparev and writev ...passed 00:11:06.531 Test: blockdev nvme passthru rw ...passed 00:11:06.531 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.531 Test: blockdev nvme admin passthru ...passed 00:11:06.531 Test: blockdev copy ...passed 00:11:06.531 Suite: bdevio tests on: Malloc2p3 00:11:06.531 Test: blockdev write read block ...passed 00:11:06.531 Test: blockdev write zeroes read block ...passed 00:11:06.531 Test: blockdev write zeroes read no split ...passed 00:11:06.531 Test: blockdev write zeroes read split ...passed 00:11:06.531 Test: blockdev write zeroes read split partial ...passed 00:11:06.531 Test: blockdev reset ...passed 00:11:06.531 Test: blockdev write read 8 blocks ...passed 00:11:06.531 Test: blockdev write read size > 128k ...passed 00:11:06.531 Test: blockdev write read invalid size ...passed 00:11:06.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.531 Test: blockdev write read max offset ...passed 00:11:06.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.531 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 00:11:06.532 Suite: bdevio tests on: Malloc2p2 00:11:06.532 Test: blockdev write read block ...passed 00:11:06.532 Test: blockdev write zeroes read block ...passed 00:11:06.532 Test: blockdev write zeroes read no split ...passed 00:11:06.532 Test: blockdev write zeroes read split ...passed 00:11:06.532 Test: blockdev write zeroes read split partial ...passed 00:11:06.532 Test: blockdev reset ...passed 00:11:06.532 Test: blockdev write read 8 blocks ...passed 00:11:06.532 Test: blockdev write read size > 128k ...passed 00:11:06.532 Test: blockdev write read invalid size ...passed 00:11:06.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.532 Test: blockdev write read max offset ...passed 00:11:06.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.532 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 
00:11:06.532 Suite: bdevio tests on: Malloc2p1 00:11:06.532 Test: blockdev write read block ...passed 00:11:06.532 Test: blockdev write zeroes read block ...passed 00:11:06.532 Test: blockdev write zeroes read no split ...passed 00:11:06.532 Test: blockdev write zeroes read split ...passed 00:11:06.532 Test: blockdev write zeroes read split partial ...passed 00:11:06.532 Test: blockdev reset ...passed 00:11:06.532 Test: blockdev write read 8 blocks ...passed 00:11:06.532 Test: blockdev write read size > 128k ...passed 00:11:06.532 Test: blockdev write read invalid size ...passed 00:11:06.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.532 Test: blockdev write read max offset ...passed 00:11:06.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.532 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 00:11:06.532 Suite: bdevio tests on: Malloc2p0 00:11:06.532 Test: blockdev write read block ...passed 00:11:06.532 Test: blockdev write zeroes read block ...passed 00:11:06.532 Test: blockdev write zeroes read no split ...passed 00:11:06.532 Test: blockdev write zeroes read split ...passed 00:11:06.532 Test: blockdev write zeroes read split partial ...passed 00:11:06.532 Test: blockdev reset ...passed 00:11:06.532 Test: blockdev write read 8 blocks ...passed 00:11:06.532 Test: blockdev write read size > 128k ...passed 00:11:06.532 Test: blockdev write read invalid size ...passed 00:11:06.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.532 Test: blockdev write read max offset ...passed 00:11:06.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.532 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 00:11:06.532 Suite: bdevio tests on: Malloc1p1 00:11:06.532 Test: blockdev write read block ...passed 00:11:06.532 Test: blockdev write zeroes read block ...passed 00:11:06.532 Test: blockdev write zeroes read no split ...passed 00:11:06.532 Test: blockdev write zeroes read split ...passed 00:11:06.532 Test: blockdev write zeroes read split partial ...passed 00:11:06.532 Test: blockdev reset ...passed 00:11:06.532 Test: blockdev write read 8 blocks ...passed 00:11:06.532 Test: blockdev write read size > 128k ...passed 00:11:06.532 Test: 
blockdev write read invalid size ...passed 00:11:06.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.532 Test: blockdev write read max offset ...passed 00:11:06.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.532 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 00:11:06.532 Suite: bdevio tests on: Malloc1p0 00:11:06.532 Test: blockdev write read block ...passed 00:11:06.532 Test: blockdev write zeroes read block ...passed 00:11:06.532 Test: blockdev write zeroes read no split ...passed 00:11:06.532 Test: blockdev write zeroes read split ...passed 00:11:06.532 Test: blockdev write zeroes read split partial ...passed 00:11:06.532 Test: blockdev reset ...passed 00:11:06.532 Test: blockdev write read 8 blocks ...passed 00:11:06.532 Test: blockdev write read size > 128k ...passed 00:11:06.532 Test: blockdev write read invalid size ...passed 00:11:06.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.532 Test: blockdev write read max offset ...passed 00:11:06.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.532 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 00:11:06.532 Suite: bdevio tests on: Malloc0 00:11:06.532 Test: blockdev write read block ...passed 00:11:06.532 Test: blockdev write zeroes read block ...passed 00:11:06.532 Test: blockdev write zeroes read no split ...passed 00:11:06.532 Test: blockdev write zeroes read split ...passed 00:11:06.532 Test: blockdev write zeroes read split partial ...passed 00:11:06.532 Test: blockdev reset ...passed 00:11:06.532 Test: blockdev write read 8 blocks ...passed 00:11:06.532 Test: blockdev write read size > 128k ...passed 00:11:06.532 Test: blockdev write read invalid size ...passed 00:11:06.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.532 Test: blockdev write read max offset ...passed 00:11:06.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.532 Test: blockdev writev readv 8 blocks ...passed 00:11:06.532 Test: blockdev writev readv 30 x 1block ...passed 00:11:06.532 Test: blockdev writev readv block ...passed 00:11:06.532 
Test: blockdev writev readv size > 128k ...passed 00:11:06.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:06.532 Test: blockdev comparev and writev ...passed 00:11:06.532 Test: blockdev nvme passthru rw ...passed 00:11:06.532 Test: blockdev nvme passthru vendor specific ...passed 00:11:06.532 Test: blockdev nvme admin passthru ...passed 00:11:06.532 Test: blockdev copy ...passed 00:11:06.532 00:11:06.532 Run Summary: Type Total Ran Passed Failed Inactive 00:11:06.532 suites 16 16 n/a 0 0 00:11:06.532 tests 368 368 368 0 0 00:11:06.532 asserts 2224 2224 2224 0 n/a 00:11:06.532 00:11:06.532 Elapsed time = 0.475 seconds 00:11:06.532 0 00:11:06.532 10:21:19 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3320204 00:11:06.532 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3320204 ']' 00:11:06.532 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3320204 00:11:06.532 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:06.532 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.532 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3320204 00:11:06.791 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.791 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.791 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3320204' 00:11:06.791 killing process with pid 3320204 00:11:06.791 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3320204 00:11:06.791 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3320204 00:11:07.050 10:21:19 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:07.050 00:11:07.050 real 0m1.586s 00:11:07.050 user 0m4.043s 00:11:07.050 sys 0m0.466s 00:11:07.050 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.050 10:21:19 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:07.050 ************************************ 00:11:07.050 END TEST bdev_bounds 00:11:07.050 ************************************ 00:11:07.050 10:21:19 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:07.050 10:21:19 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:07.050 10:21:19 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.050 10:21:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:07.050 ************************************ 00:11:07.050 START TEST bdev_nbd 00:11:07.050 ************************************ 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 
-- # uname -s 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:07.050 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3320506 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3320506 /var/tmp/spdk-nbd.sock 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3320506 ']' 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:07.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
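For reference, the 368-test bdevio run summarized further up (16 suites, Elapsed time = 0.475 seconds) is driven exactly as the traced command shows: bdevio is started with -w so it waits on its RPC socket, and tests.py then triggers every suite. A hedged sketch of doing the same by hand from the SPDK root, against the bdev.json the harness generated earlier:

    # start bdevio and let it wait for an RPC trigger
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    # once it is listening, run all suites and print the CUnit summary
    ./test/bdev/bdevio/tests.py perform_tests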
00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.051 10:21:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:07.051 [2024-07-26 10:21:19.871300] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:11:07.051 [2024-07-26 10:21:19.871359] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.4 cannot be used 
00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:07.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.051 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:07.310 [2024-07-26 10:21:20.006581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.310 [2024-07-26 10:21:20.051620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.310 [2024-07-26 10:21:20.205348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:07.310 [2024-07-26 10:21:20.205406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:07.310 [2024-07-26 10:21:20.205420] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:07.600 [2024-07-26 10:21:20.213354] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:07.600 [2024-07-26 10:21:20.213380] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:07.600 [2024-07-26 10:21:20.221366] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:07.600 [2024-07-26 10:21:20.221389] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:07.600 [2024-07-26 10:21:20.292230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:07.600 [2024-07-26 10:21:20.292278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.600 [2024-07-26 10:21:20.292294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e94f0 00:11:07.600 [2024-07-26 10:21:20.292306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.600 [2024-07-26 10:21:20.293574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.600 [2024-07-26 10:21:20.293603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@864 -- # return 0 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:08.170 10:21:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:08.170 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.171 1+0 records in 00:11:08.171 1+0 records out 00:11:08.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231389 s, 17.7 MB/s 00:11:08.171 10:21:21 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:08.171 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:08.429 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.430 1+0 records in 00:11:08.430 1+0 records out 00:11:08.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280947 s, 14.6 MB/s 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:08.430 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:08.687 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.688 1+0 records in 00:11:08.688 1+0 records out 00:11:08.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307471 s, 13.3 MB/s 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:08.688 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:11:08.946 1+0 records in 00:11:08.946 1+0 records out 00:11:08.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356683 s, 11.5 MB/s 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:08.946 10:21:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.204 1+0 records in 00:11:09.204 1+0 records out 00:11:09.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371663 s, 11.0 MB/s 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:09.204 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 
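The same per-device cycle repeats throughout the trace: rpc.py nbd_start_disk attaches the next bdev to an NBD node, waitfornbd polls /proc/partitions (up to 20 attempts) until the kernel exposes it, and a single 4096-byte dd read with iflag=direct proves the device is readable before the size of the copied file is checked and the scratch file removed; later in the run nbd_stop_disk plus waitfornbd_exit reverses the process. An illustrative reconstruction of that loop, not the verbatim nbd_common.sh helpers; the rpc wrapper, sleep interval, and the short bdev list are assumptions for brevity:

  # rpc.py invocation copied from the trace; wrapped in a function for readability.
  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  nbdtest=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest

  for bdev in Malloc0 Malloc1p0 Malloc1p1 Malloc2p0; do   # the real run walks all 16 bdevs
      nbd=$(rpc nbd_start_disk "$bdev")                   # prints the allocated node, e.g. /dev/nbd0
      name=$(basename "$nbd")
      # Wait for the kernel to publish the device (polling interval is an assumption).
      for i in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # Read one block directly from the NBD device and confirm a non-empty copy landed.
      dd if="$nbd" of="$nbdtest" bs=4096 count=1 iflag=direct
      [ "$(stat -c %s "$nbdtest")" != 0 ] && rm -f "$nbdtest"
      # Teardown, as seen later in the trace: detach and wait for the entry to vanish again.
      rpc nbd_stop_disk "$nbd"
      while grep -q -w "$name" /proc/partitions; do sleep 0.1; done
  done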
00:11:09.461 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.462 1+0 records in 00:11:09.462 1+0 records out 00:11:09.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333322 s, 12.3 MB/s 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.462 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.719 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.719 10:21:22 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.977 1+0 records in 00:11:09.977 1+0 records out 00:11:09.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037085 s, 11.0 MB/s 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:09.978 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.236 1+0 records in 00:11:10.236 1+0 records out 00:11:10.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392153 s, 10.4 MB/s 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.236 10:21:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:10.236 10:21:22 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.494 1+0 records in 00:11:10.494 1+0 records out 00:11:10.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397442 s, 10.3 MB/s 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:10.494 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.751 1+0 records in 00:11:10.751 1+0 records out 00:11:10.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571693 s, 7.2 MB/s 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:10.751 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:11.008 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:11.008 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.009 1+0 records in 00:11:11.009 1+0 records out 00:11:11.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484842 s, 8.4 MB/s 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:11.009 10:21:23 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:11.009 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.267 1+0 records in 00:11:11.267 1+0 records out 00:11:11.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597296 s, 6.9 MB/s 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:11.267 10:21:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.267 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:11.267 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:11.267 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:11.267 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:11.267 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:11.524 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:11.524 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:11.524 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:11.524 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:11.524 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w 
nbd12 /proc/partitions 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.525 1+0 records in 00:11:11.525 1+0 records out 00:11:11.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780512 s, 5.2 MB/s 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:11.525 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.782 1+0 records in 00:11:11.782 1+0 records out 00:11:11.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477471 s, 8.6 MB/s 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:11.782 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:11.783 10:21:24 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:11.783 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:11.783 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:11.783 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:11.783 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.040 1+0 records in 00:11:12.040 1+0 records out 00:11:12.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633235 s, 6.5 MB/s 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:12.040 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:12.298 10:21:24 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.298 1+0 records in 00:11:12.298 1+0 records out 00:11:12.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427154 s, 9.6 MB/s 00:11:12.298 10:21:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:12.298 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.555 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd0", 00:11:12.555 "bdev_name": "Malloc0" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd1", 00:11:12.555 "bdev_name": "Malloc1p0" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd2", 00:11:12.555 "bdev_name": "Malloc1p1" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd3", 00:11:12.555 "bdev_name": "Malloc2p0" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd4", 00:11:12.555 "bdev_name": "Malloc2p1" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd5", 00:11:12.555 "bdev_name": "Malloc2p2" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd6", 00:11:12.555 "bdev_name": "Malloc2p3" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd7", 00:11:12.555 "bdev_name": "Malloc2p4" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd8", 00:11:12.555 "bdev_name": "Malloc2p5" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd9", 00:11:12.555 "bdev_name": "Malloc2p6" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd10", 00:11:12.555 "bdev_name": "Malloc2p7" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd11", 00:11:12.555 "bdev_name": "TestPT" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd12", 00:11:12.555 "bdev_name": "raid0" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd13", 00:11:12.555 "bdev_name": "concat0" 00:11:12.555 }, 00:11:12.555 { 00:11:12.555 "nbd_device": "/dev/nbd14", 00:11:12.556 "bdev_name": "raid1" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 
"nbd_device": "/dev/nbd15", 00:11:12.556 "bdev_name": "AIO0" 00:11:12.556 } 00:11:12.556 ]' 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd0", 00:11:12.556 "bdev_name": "Malloc0" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd1", 00:11:12.556 "bdev_name": "Malloc1p0" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd2", 00:11:12.556 "bdev_name": "Malloc1p1" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd3", 00:11:12.556 "bdev_name": "Malloc2p0" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd4", 00:11:12.556 "bdev_name": "Malloc2p1" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd5", 00:11:12.556 "bdev_name": "Malloc2p2" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd6", 00:11:12.556 "bdev_name": "Malloc2p3" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd7", 00:11:12.556 "bdev_name": "Malloc2p4" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd8", 00:11:12.556 "bdev_name": "Malloc2p5" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd9", 00:11:12.556 "bdev_name": "Malloc2p6" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd10", 00:11:12.556 "bdev_name": "Malloc2p7" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd11", 00:11:12.556 "bdev_name": "TestPT" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd12", 00:11:12.556 "bdev_name": "raid0" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd13", 00:11:12.556 "bdev_name": "concat0" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd14", 00:11:12.556 "bdev_name": "raid1" 00:11:12.556 }, 00:11:12.556 { 00:11:12.556 "nbd_device": "/dev/nbd15", 00:11:12.556 "bdev_name": "AIO0" 00:11:12.556 } 00:11:12.556 ]' 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.556 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:12.812 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:12.813 
10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.813 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.070 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:13.329 10:21:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.329 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:13.587 
10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.587 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.588 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.846 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.105 10:21:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.364 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.623 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.883 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.142 10:21:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.401 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.660 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.919 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.177 10:21:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
count=0 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:16.436 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:16.437 /dev/nbd0 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:16.437 
10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.437 1+0 records in 00:11:16.437 1+0 records out 00:11:16.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260157 s, 15.7 MB/s 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.437 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:16.695 /dev/nbd1 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.695 1+0 records in 00:11:16.695 1+0 records out 00:11:16.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267782 s, 15.3 MB/s 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.695 
10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.695 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:16.954 /dev/nbd10 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.954 1+0 records in 00:11:16.954 1+0 records out 00:11:16.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232017 s, 17.7 MB/s 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.954 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.213 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.213 10:21:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.213 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.213 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.213 10:21:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:17.213 /dev/nbd11 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:17.213 10:21:30 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@869 -- # local i 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.213 1+0 records in 00:11:17.213 1+0 records out 00:11:17.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262704 s, 15.6 MB/s 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.213 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:17.472 /dev/nbd12 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.472 1+0 records in 00:11:17.472 1+0 records out 00:11:17.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312214 s, 13.1 MB/s 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.472 10:21:30 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.472 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:17.731 /dev/nbd13 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.731 1+0 records in 00:11:17.731 1+0 records out 00:11:17.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415174 s, 9.9 MB/s 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.731 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:17.990 /dev/nbd14 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.990 1+0 records in 00:11:17.990 1+0 records out 00:11:17.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336052 s, 12.2 MB/s 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.990 10:21:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:18.249 /dev/nbd15 00:11:18.249 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:18.249 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.250 1+0 records in 00:11:18.250 1+0 records out 00:11:18.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435735 s, 9.4 MB/s 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.250 10:21:31 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@886 -- # size=4096 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.250 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:18.509 /dev/nbd2 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.509 1+0 records in 00:11:18.509 1+0 records out 00:11:18.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421243 s, 9.7 MB/s 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.509 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:18.768 /dev/nbd3 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 
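The trace above shows the waitfornbd readiness check (common/autotest_common.sh@868-889) being run for each device that nbd_start_disk attaches. Below is a minimal sketch of what those traced steps appear to do, reconstructed only from the commands visible in this log; the names are taken from the trace, while the sleep between retries and the /tmp scratch path are assumptions, since the untraced parts of the script are not shown here.

  # Sketch reconstructed from the trace; not the literal script source.
  waitfornbd_sketch() {
      local nbd_name=$1 i size
      # First wait (up to 20 attempts) for the kernel to list the device.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off between attempts
      done
      # Then confirm the device answers a single 4 KiB direct read.
      for ((i = 1; i <= 20; i++)); do
          dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != "0" ] && return 0
          sleep 0.1   # assumed back-off between attempts
      done
      return 1
  }
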
00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.768 1+0 records in 00:11:18.768 1+0 records out 00:11:18.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449227 s, 9.1 MB/s 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.768 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:19.028 /dev/nbd4 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.028 1+0 records in 00:11:19.028 1+0 records out 00:11:19.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053 s, 7.7 MB/s 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:19.028 10:21:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:19.287 /dev/nbd5 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.287 1+0 records in 00:11:19.287 1+0 records out 00:11:19.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446242 s, 9.2 MB/s 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:19.287 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:19.547 /dev/nbd6 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.547 1+0 records in 00:11:19.547 1+0 records out 00:11:19.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519743 s, 7.9 MB/s 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:19.547 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:19.806 /dev/nbd7 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.806 1+0 records in 00:11:19.806 1+0 records out 00:11:19.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630445 s, 6.5 MB/s 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.806 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.807 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.807 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:19.807 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:19.807 /dev/nbd8 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.116 1+0 records in 00:11:20.116 1+0 records out 00:11:20.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765601 s, 5.4 MB/s 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:20.116 /dev/nbd9 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
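Both the count=0 check traced earlier (after the nbd_stop_disk loop) and the count=16 check that follows below come from the nbd_get_count helper (bdev/nbd_common.sh@61-66). A minimal sketch of what the traced steps do, using the RPC socket and jq filter exactly as they appear in this log; wrapping them in a standalone function is for illustration only:

  # Sketch of the traced nbd_get_count steps; the function wrapper is illustrative.
  nbd_get_count_sketch() {
      local rpc_server=/var/tmp/spdk-nbd.sock
      local disks_json disks_name
      disks_json=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
      disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
      # grep -c exits non-zero when nothing matches, hence the '|| true'
      # step that shows up in the trace when the count is 0.
      echo "$disks_name" | grep -c /dev/nbd || true
  }
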
00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.116 1+0 records in 00:11:20.116 1+0 records out 00:11:20.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824367 s, 5.0 MB/s 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:20.116 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:20.117 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.117 10:21:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.376 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd0", 00:11:20.376 "bdev_name": "Malloc0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd1", 00:11:20.376 "bdev_name": "Malloc1p0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd10", 00:11:20.376 "bdev_name": "Malloc1p1" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd11", 00:11:20.376 "bdev_name": "Malloc2p0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd12", 00:11:20.376 "bdev_name": "Malloc2p1" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd13", 00:11:20.376 "bdev_name": "Malloc2p2" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd14", 00:11:20.376 "bdev_name": "Malloc2p3" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd15", 00:11:20.376 "bdev_name": "Malloc2p4" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd2", 00:11:20.376 "bdev_name": "Malloc2p5" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd3", 00:11:20.376 "bdev_name": "Malloc2p6" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd4", 00:11:20.376 "bdev_name": "Malloc2p7" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd5", 00:11:20.376 "bdev_name": "TestPT" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd6", 00:11:20.376 "bdev_name": "raid0" 
00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd7", 00:11:20.376 "bdev_name": "concat0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd8", 00:11:20.376 "bdev_name": "raid1" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd9", 00:11:20.376 "bdev_name": "AIO0" 00:11:20.376 } 00:11:20.376 ]' 00:11:20.376 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd0", 00:11:20.376 "bdev_name": "Malloc0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd1", 00:11:20.376 "bdev_name": "Malloc1p0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd10", 00:11:20.376 "bdev_name": "Malloc1p1" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd11", 00:11:20.376 "bdev_name": "Malloc2p0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd12", 00:11:20.376 "bdev_name": "Malloc2p1" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd13", 00:11:20.376 "bdev_name": "Malloc2p2" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd14", 00:11:20.376 "bdev_name": "Malloc2p3" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd15", 00:11:20.376 "bdev_name": "Malloc2p4" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd2", 00:11:20.376 "bdev_name": "Malloc2p5" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd3", 00:11:20.376 "bdev_name": "Malloc2p6" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd4", 00:11:20.376 "bdev_name": "Malloc2p7" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd5", 00:11:20.376 "bdev_name": "TestPT" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd6", 00:11:20.376 "bdev_name": "raid0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd7", 00:11:20.376 "bdev_name": "concat0" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd8", 00:11:20.376 "bdev_name": "raid1" 00:11:20.376 }, 00:11:20.376 { 00:11:20.376 "nbd_device": "/dev/nbd9", 00:11:20.376 "bdev_name": "AIO0" 00:11:20.376 } 00:11:20.376 ]' 00:11:20.376 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:20.376 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:20.376 /dev/nbd1 00:11:20.376 /dev/nbd10 00:11:20.376 /dev/nbd11 00:11:20.376 /dev/nbd12 00:11:20.376 /dev/nbd13 00:11:20.376 /dev/nbd14 00:11:20.376 /dev/nbd15 00:11:20.376 /dev/nbd2 00:11:20.376 /dev/nbd3 00:11:20.376 /dev/nbd4 00:11:20.376 /dev/nbd5 00:11:20.376 /dev/nbd6 00:11:20.376 /dev/nbd7 00:11:20.377 /dev/nbd8 00:11:20.377 /dev/nbd9' 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:20.377 /dev/nbd1 00:11:20.377 /dev/nbd10 00:11:20.377 /dev/nbd11 00:11:20.377 /dev/nbd12 00:11:20.377 /dev/nbd13 00:11:20.377 /dev/nbd14 00:11:20.377 /dev/nbd15 00:11:20.377 /dev/nbd2 00:11:20.377 /dev/nbd3 00:11:20.377 /dev/nbd4 00:11:20.377 /dev/nbd5 00:11:20.377 /dev/nbd6 00:11:20.377 /dev/nbd7 00:11:20.377 /dev/nbd8 00:11:20.377 /dev/nbd9' 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:11:20.377 
10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:20.377 256+0 records in 00:11:20.377 256+0 records out 00:11:20.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114694 s, 91.4 MB/s 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.377 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:20.637 256+0 records in 00:11:20.637 256+0 records out 00:11:20.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166166 s, 6.3 MB/s 00:11:20.637 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.637 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:20.895 256+0 records in 00:11:20.895 256+0 records out 00:11:20.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159963 s, 6.6 MB/s 00:11:20.895 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.895 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:20.895 256+0 records in 00:11:20.895 256+0 records out 00:11:20.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154696 s, 6.8 MB/s 00:11:20.895 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.895 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:21.153 256+0 records in 00:11:21.153 256+0 records out 00:11:21.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167899 s, 6.2 MB/s 00:11:21.153 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.153 10:21:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:21.153 256+0 records in 00:11:21.153 256+0 records out 00:11:21.153 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0904754 s, 11.6 MB/s 00:11:21.153 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.153 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:21.411 256+0 records in 00:11:21.411 256+0 records out 00:11:21.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0964387 s, 10.9 MB/s 00:11:21.411 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.411 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:21.411 256+0 records in 00:11:21.411 256+0 records out 00:11:21.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0894611 s, 11.7 MB/s 00:11:21.411 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.411 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:21.668 256+0 records in 00:11:21.668 256+0 records out 00:11:21.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163443 s, 6.4 MB/s 00:11:21.668 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.668 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:21.669 256+0 records in 00:11:21.669 256+0 records out 00:11:21.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102253 s, 10.3 MB/s 00:11:21.669 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.669 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:21.927 256+0 records in 00:11:21.927 256+0 records out 00:11:21.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0944779 s, 11.1 MB/s 00:11:21.927 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.927 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:21.927 256+0 records in 00:11:21.927 256+0 records out 00:11:21.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167596 s, 6.3 MB/s 00:11:21.927 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.927 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:22.184 256+0 records in 00:11:22.184 256+0 records out 00:11:22.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.097725 s, 10.7 MB/s 00:11:22.184 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:22.184 10:21:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:22.184 256+0 records in 00:11:22.184 256+0 records out 00:11:22.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.16873 s, 6.2 MB/s 00:11:22.184 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:22.184 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:22.442 256+0 records in 00:11:22.442 256+0 records out 00:11:22.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168738 s, 6.2 MB/s 00:11:22.442 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:22.442 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:22.442 256+0 records in 00:11:22.442 256+0 records out 00:11:22.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106402 s, 9.9 MB/s 00:11:22.442 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:22.442 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:22.700 256+0 records in 00:11:22.700 256+0 records out 00:11:22.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166094 s, 6.3 MB/s 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 
00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.700 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.959 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.218 10:21:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:23.219 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( 
i = 1 )) 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.477 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.735 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.992 10:21:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # 
return 0 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.250 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.508 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.766 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.024 10:21:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd3 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.282 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.541 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.800 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.059 10:21:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.318 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:26.575 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:26.833 malloc_lvol_verify 00:11:26.833 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:27.091 7b5b4631-12ea-4a9a-892f-16ac0d678591 00:11:27.091 10:21:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:27.348 725476e5-73bd-4fe3-beea-d6ec1e3bf708 00:11:27.348 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:27.606 /dev/nbd0 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:27.606 mke2fs 1.46.5 (30-Dec-2021) 00:11:27.606 Discarding device blocks: 0/4096 done 00:11:27.606 Creating filesystem with 4096 1k blocks and 1024 inodes 
00:11:27.606 00:11:27.606 Allocating group tables: 0/1 done 00:11:27.606 Writing inode tables: 0/1 done 00:11:27.606 Creating journal (1024 blocks): done 00:11:27.606 Writing superblocks and filesystem accounting information: 0/1 done 00:11:27.606 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.606 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3320506 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3320506 ']' 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3320506 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3320506 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3320506' 00:11:27.864 killing process with pid 3320506 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3320506 00:11:27.864 10:21:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3320506 00:11:28.430 10:21:41 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:28.430 00:11:28.430 real 0m21.327s 00:11:28.430 user 0m26.028s 00:11:28.430 sys 0m12.376s 00:11:28.430 10:21:41 
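In the teardown traced before this summary, every nbd_stop_disk RPC is paired with a waitfornbd_exit check that polls /proc/partitions until the kernel has released the device, bounded at 20 attempts. A condensed sketch of that helper as it appears in the traces; only the loop bound, the grep, the break and the return are visible in the log, so the sleep between attempts is an assumption:

waitfornbd_exit() {
    # Wait until $1 (e.g. nbd0) no longer appears in /proc/partitions
    # after the corresponding nbd_stop_disk RPC.
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed back-off between polls; not visible in the trace
    done
    return 0
}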
blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.430 10:21:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:28.430 ************************************ 00:11:28.430 END TEST bdev_nbd 00:11:28.430 ************************************ 00:11:28.430 10:21:41 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:28.430 10:21:41 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:11:28.430 10:21:41 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:11:28.430 10:21:41 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:11:28.430 10:21:41 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.430 10:21:41 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.431 10:21:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:28.431 ************************************ 00:11:28.431 START TEST bdev_fio 00:11:28.431 ************************************ 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:28.431 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:11:28.431 
10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.431 10:21:41 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:28.431 ************************************ 00:11:28.431 START TEST bdev_fio_rw_verify 00:11:28.431 ************************************ 00:11:28.431 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:28.690 10:21:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:28.948 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.948 fio-3.35 00:11:28.948 Starting 16 threads 00:11:28.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.206 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:29.207 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:29.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.207 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:41.425 00:11:41.425 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=3325046: Fri Jul 26 10:21:52 2024 00:11:41.425 read: IOPS=91.9k, BW=359MiB/s (376MB/s)(3590MiB/10001msec) 00:11:41.425 slat (usec): min=2, max=237, avg=35.33, stdev=15.20 00:11:41.425 clat (usec): min=10, max=1671, avg=285.56, stdev=133.99 00:11:41.425 lat (usec): min=23, max=1762, avg=320.89, stdev=142.46 00:11:41.425 clat percentiles (usec): 00:11:41.425 | 50.000th=[ 281], 99.000th=[ 603], 99.900th=[ 832], 99.990th=[ 1057], 00:11:41.425 | 99.999th=[ 1582] 00:11:41.425 write: IOPS=144k, BW=563MiB/s (590MB/s)(5547MiB/9851msec); 
0 zone resets 00:11:41.425 slat (usec): min=7, max=4329, avg=47.91, stdev=17.19 00:11:41.425 clat (usec): min=12, max=4990, avg=336.31, stdev=161.60 00:11:41.425 lat (usec): min=32, max=5035, avg=384.22, stdev=171.19 00:11:41.425 clat percentiles (usec): 00:11:41.425 | 50.000th=[ 322], 99.000th=[ 725], 99.900th=[ 1418], 99.990th=[ 1582], 00:11:41.425 | 99.999th=[ 1663] 00:11:41.425 bw ( KiB/s): min=487557, max=692624, per=98.76%, avg=569463.68, stdev=3515.67, samples=304 00:11:41.425 iops : min=121888, max=173152, avg=142364.58, stdev=878.90, samples=304 00:11:41.425 lat (usec) : 20=0.01%, 50=0.63%, 100=4.87%, 250=31.87%, 500=51.89% 00:11:41.425 lat (usec) : 750=10.05%, 1000=0.32% 00:11:41.425 lat (msec) : 2=0.37%, 10=0.01% 00:11:41.425 cpu : usr=99.25%, sys=0.37%, ctx=664, majf=0, minf=2669 00:11:41.425 IO depths : 1=12.4%, 2=24.8%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.425 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.425 issued rwts: total=919033,1420037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:41.426 00:11:41.426 Run status group 0 (all jobs): 00:11:41.426 READ: bw=359MiB/s (376MB/s), 359MiB/s-359MiB/s (376MB/s-376MB/s), io=3590MiB (3764MB), run=10001-10001msec 00:11:41.426 WRITE: bw=563MiB/s (590MB/s), 563MiB/s-563MiB/s (590MB/s-590MB/s), io=5547MiB (5816MB), run=9851-9851msec 00:11:41.426 00:11:41.426 real 0m11.587s 00:11:41.426 user 2m53.514s 00:11:41.426 sys 0m1.348s 00:11:41.426 10:21:52 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.426 10:21:52 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:11:41.426 ************************************ 00:11:41.426 END TEST bdev_fio_rw_verify 00:11:41.426 ************************************ 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:41.426 10:21:52 
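The rw_verify pass summarized above is driven by stock fio with the SPDK bdev plugin loaded through LD_PRELOAD, pointed at the generated job file and the bdev JSON config. The essential invocation, reduced from the fio_plugin trace earlier in this run (paths as in this workspace):

# Invocation behind the rw_verify results above.
spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
LD_PRELOAD="$spdk/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    "$spdk/test/bdev/bdev.fio" \
    --verify_state_save=0 \
    --spdk_json_conf="$spdk/test/bdev/bdev.json" \
    --spdk_mem=0 \
    --aux-path="$spdk/../output"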
blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:11:41.426 10:21:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:41.427 10:21:52 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f1408f8b-e0d3-4342-8ebe-1c7883c05890"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f1408f8b-e0d3-4342-8ebe-1c7883c05890",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "092fde67-b181-50c0-a3ef-ee7712e0a190"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "092fde67-b181-50c0-a3ef-ee7712e0a190",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f770f83c-599b-5499-8d09-8b0d90b6ece4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f770f83c-599b-5499-8d09-8b0d90b6ece4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b368092e-6227-5904-acc3-ed0ac227b81c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b368092e-6227-5904-acc3-ed0ac227b81c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "9152497e-d853-55cf-822c-1fce7d526151"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9152497e-d853-55cf-822c-1fce7d526151",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "0462d91b-1306-58f5-b8f2-0504336219c3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0462d91b-1306-58f5-b8f2-0504336219c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "72b52fa1-a3b1-5a1a-a2a6-59fa5fe7954b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "72b52fa1-a3b1-5a1a-a2a6-59fa5fe7954b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "186032fb-bebb-5b38-8013-15fdd909ffb5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "186032fb-bebb-5b38-8013-15fdd909ffb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8daea17b-dd53-5fcb-b896-20c6b1ec1ceb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8daea17b-dd53-5fcb-b896-20c6b1ec1ceb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8e99edfa-2f25-561f-b861-6fe3cb007622"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8e99edfa-2f25-561f-b861-6fe3cb007622",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c303dae3-1fd2-58e8-9033-ecfd69d6e23a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"c303dae3-1fd2-58e8-9033-ecfd69d6e23a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "669c3fe8-f6cc-55c0-bec7-6f2558796193"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "669c3fe8-f6cc-55c0-bec7-6f2558796193",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e1d078ae-2756-46f4-9346-b442832da487"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e1d078ae-2756-46f4-9346-b442832da487",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e1d078ae-2756-46f4-9346-b442832da487",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2c025866-7973-4d11-9f4d-94fdd79e8cb9",' ' "is_configured": true,' ' "data_offset": 
0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "166c2a4d-7298-4d99-8c89-a4ad4553ce99",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "bea468bc-9407-4fb2-aa26-9b4679d4a53f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bea468bc-9407-4fb2-aa26-9b4679d4a53f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bea468bc-9407-4fb2-aa26-9b4679d4a53f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "130c28f9-9fd8-4738-afff-7cd41a6dd635",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "960fd66e-5cd9-4291-b11f-97ef087f60d2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "96623534-3630-4468-98e2-5e9248c8a394"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "96623534-3630-4468-98e2-5e9248c8a394",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "96623534-3630-4468-98e2-5e9248c8a394",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": 
"29c8dd47-c559-4452-89d1-e2aa7974b9f0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "928139da-84d0-4766-a864-cc4ca14dc303",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c1c421d4-ed59-45a3-b2b4-a65b554bacb6"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c1c421d4-ed59-45a3-b2b4-a65b554bacb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:41.427 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:11:41.427 Malloc1p0 00:11:41.427 Malloc1p1 00:11:41.427 Malloc2p0 00:11:41.427 Malloc2p1 00:11:41.427 Malloc2p2 00:11:41.427 Malloc2p3 00:11:41.427 Malloc2p4 00:11:41.427 Malloc2p5 00:11:41.427 Malloc2p6 00:11:41.427 Malloc2p7 00:11:41.427 TestPT 00:11:41.427 raid0 00:11:41.427 concat0 ]] 00:11:41.427 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "f1408f8b-e0d3-4342-8ebe-1c7883c05890"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f1408f8b-e0d3-4342-8ebe-1c7883c05890",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "092fde67-b181-50c0-a3ef-ee7712e0a190"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "092fde67-b181-50c0-a3ef-ee7712e0a190",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' 
"reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "f770f83c-599b-5499-8d09-8b0d90b6ece4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f770f83c-599b-5499-8d09-8b0d90b6ece4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b368092e-6227-5904-acc3-ed0ac227b81c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b368092e-6227-5904-acc3-ed0ac227b81c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "9152497e-d853-55cf-822c-1fce7d526151"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9152497e-d853-55cf-822c-1fce7d526151",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "0462d91b-1306-58f5-b8f2-0504336219c3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' 
"num_blocks": 8192,' ' "uuid": "0462d91b-1306-58f5-b8f2-0504336219c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "72b52fa1-a3b1-5a1a-a2a6-59fa5fe7954b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "72b52fa1-a3b1-5a1a-a2a6-59fa5fe7954b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "186032fb-bebb-5b38-8013-15fdd909ffb5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "186032fb-bebb-5b38-8013-15fdd909ffb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8daea17b-dd53-5fcb-b896-20c6b1ec1ceb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8daea17b-dd53-5fcb-b896-20c6b1ec1ceb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8e99edfa-2f25-561f-b861-6fe3cb007622"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8e99edfa-2f25-561f-b861-6fe3cb007622",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "c303dae3-1fd2-58e8-9033-ecfd69d6e23a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c303dae3-1fd2-58e8-9033-ecfd69d6e23a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "669c3fe8-f6cc-55c0-bec7-6f2558796193"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "669c3fe8-f6cc-55c0-bec7-6f2558796193",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e1d078ae-2756-46f4-9346-b442832da487"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e1d078ae-2756-46f4-9346-b442832da487",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e1d078ae-2756-46f4-9346-b442832da487",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2c025866-7973-4d11-9f4d-94fdd79e8cb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "166c2a4d-7298-4d99-8c89-a4ad4553ce99",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "bea468bc-9407-4fb2-aa26-9b4679d4a53f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bea468bc-9407-4fb2-aa26-9b4679d4a53f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bea468bc-9407-4fb2-aa26-9b4679d4a53f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "130c28f9-9fd8-4738-afff-7cd41a6dd635",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "960fd66e-5cd9-4291-b11f-97ef087f60d2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "96623534-3630-4468-98e2-5e9248c8a394"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' 
' "uuid": "96623534-3630-4468-98e2-5e9248c8a394",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "96623534-3630-4468-98e2-5e9248c8a394",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "29c8dd47-c559-4452-89d1-e2aa7974b9f0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "928139da-84d0-4766-a864-cc4ca14dc303",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c1c421d4-ed59-45a3-b2b4-a65b554bacb6"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c1c421d4-ed59-45a3-b2b4-a65b554bacb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- 
# for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.429 10:21:53 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:41.429 ************************************ 00:11:41.429 START TEST bdev_fio_trim 00:11:41.429 ************************************ 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:41.429 10:21:53 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:41.429 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:11:41.429 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.429 fio-3.35 00:11:41.429 Starting 14 threads 00:11:41.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.429 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:41.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.429 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:41.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.429 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.5 cannot be 
used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:41.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.430 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:51.416 00:11:51.416 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=3327199: Fri Jul 26 10:22:04 2024 00:11:51.416 write: IOPS=157k, BW=612MiB/s (642MB/s)(6125MiB/10001msec); 0 zone resets 00:11:51.416 slat (usec): min=3, max=3390, avg=31.68, stdev= 9.95 00:11:51.416 clat (usec): min=21, max=3597, avg=223.66, stdev=81.31 00:11:51.416 lat (usec): min=49, max=3624, avg=255.34, stdev=85.33 00:11:51.416 clat percentiles (usec): 00:11:51.416 | 50.000th=[ 217], 99.000th=[ 420], 99.900th=[ 545], 99.990th=[ 652], 00:11:51.416 | 99.999th=[ 971] 00:11:51.416 bw ( KiB/s): min=510272, max=774384, per=100.00%, avg=632226.11, stdev=7666.13, samples=266 00:11:51.416 iops : min=127570, max=193596, avg=158056.68, stdev=1916.53, samples=266 00:11:51.416 trim: IOPS=157k, BW=612MiB/s (642MB/s)(6125MiB/10001msec); 0 zone resets 00:11:51.416 slat (usec): min=4, max=863, avg=21.91, stdev= 6.62 00:11:51.416 clat (usec): min=6, max=3624, avg=249.99, stdev=89.16 00:11:51.416 lat (usec): min=18, max=3639, avg=271.90, stdev=92.28 00:11:51.416 clat percentiles (usec): 00:11:51.416 | 50.000th=[ 241], 99.000th=[ 465], 99.900th=[ 603], 99.990th=[ 717], 00:11:51.416 | 99.999th=[ 1057] 00:11:51.416 bw ( KiB/s): min=510280, max=774376, per=100.00%, avg=632227.37, stdev=7666.11, samples=266 00:11:51.416 iops : min=127572, max=193594, avg=158056.79, stdev=1916.52, samples=266 00:11:51.416 lat (usec) : 10=0.01%, 20=0.02%, 50=0.23%, 100=2.35%, 250=57.07% 00:11:51.416 lat (usec) : 500=40.02%, 750=0.32%, 1000=0.01% 00:11:51.416 lat (msec) : 2=0.01%, 4=0.01% 00:11:51.416 cpu : usr=99.64%, sys=0.00%, ctx=530, majf=0, minf=1008 00:11:51.416 IO depths : 1=12.4%, 2=24.8%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.416 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.416 issued rwts: total=0,1568087,1568093,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.416 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:51.416 00:11:51.416 Run status group 0 (all jobs): 00:11:51.416 WRITE: bw=612MiB/s (642MB/s), 
612MiB/s-612MiB/s (642MB/s-642MB/s), io=6125MiB (6423MB), run=10001-10001msec 00:11:51.416 TRIM: bw=612MiB/s (642MB/s), 612MiB/s-612MiB/s (642MB/s-642MB/s), io=6125MiB (6423MB), run=10001-10001msec 00:11:51.675 00:11:51.675 real 0m11.418s 00:11:51.675 user 2m32.635s 00:11:51.675 sys 0m0.978s 00:11:51.675 10:22:04 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.675 10:22:04 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:11:51.675 ************************************ 00:11:51.675 END TEST bdev_fio_trim 00:11:51.675 ************************************ 00:11:51.934 10:22:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:11:51.934 10:22:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:51.934 10:22:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:11:51.934 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:11:51.934 10:22:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:11:51.934 00:11:51.934 real 0m23.372s 00:11:51.934 user 5m26.348s 00:11:51.934 sys 0m2.525s 00:11:51.934 10:22:04 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.934 10:22:04 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 ************************************ 00:11:51.934 END TEST bdev_fio 00:11:51.934 ************************************ 00:11:51.934 10:22:04 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:51.934 10:22:04 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:51.934 10:22:04 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:51.934 10:22:04 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.934 10:22:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 ************************************ 00:11:51.934 START TEST bdev_verify 00:11:51.934 ************************************ 00:11:51.934 10:22:04 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:51.934 [2024-07-26 10:22:04.717784] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
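For reference: the bdev_verify stage started above drives the bdevperf example application against the same generated bdev.json. A minimal standalone sketch of that invocation, with the flags taken from the command echoed in the log (the workspace path is the CI path above; the -C flag is simply passed through as the test command does, without inferring its behaviour here):

    # Sketch only: re-run the verify workload by hand against the generated bdev config.
    #   -q 128     queue depth per job
    #   -o 4096    I/O size in bytes
    #   -w verify  verify workload (written data is read back and checked)
    #   -t 5       run time in seconds
    #   -m 0x3     core mask (cores 0 and 1); -C is passed as-is from the test command
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK"/build/examples/bdevperf --json "$SPDK"/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3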
00:11:51.934 [2024-07-26 10:22:04.717838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329084 ] 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:51.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.934 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:51.935 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:51.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:51.935 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:52.194 [2024-07-26 10:22:04.849807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:52.194 [2024-07-26 10:22:04.894529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.194 [2024-07-26 10:22:04.894534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.194 [2024-07-26 10:22:05.044394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.194 [2024-07-26 10:22:05.044449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:52.194 [2024-07-26 10:22:05.044462] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:52.194 [2024-07-26 10:22:05.052405] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.194 [2024-07-26 10:22:05.052429] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.194 [2024-07-26 10:22:05.060416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.194 [2024-07-26 10:22:05.060438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.453 [2024-07-26 10:22:05.131469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.453 [2024-07-26 10:22:05.131517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.453 [2024-07-26 10:22:05.131533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29bbc40 00:11:52.453 [2024-07-26 10:22:05.131545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.453 [2024-07-26 10:22:05.132986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.453 [2024-07-26 10:22:05.133014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:52.712 Running I/O for 5 seconds... 
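The bdevperf run that starts here uses -w verify with 4 KiB requests (-o 4096) at queue depth 128 for 5 seconds (-t 5) on cores 0 and 1 (-m 0x3), per the command line above. In broad terms a verify workload writes known data and then read-checks it; the sketch below is only a generic, file-backed illustration of that idea in Python, not SPDK's implementation.

    import os, hashlib

    # Generic write-then-read-back verification over a file used as a block image;
    # purely illustrative of what a "verify" style workload checks.
    BLOCK = 4096          # matches the -o 4096 request size used above
    NUM_BLOCKS = 1024     # small illustrative extent (hypothetical)

    def pattern(lba: int) -> bytes:
        # Deterministic per-block pattern so corruption is detectable.
        return hashlib.sha256(lba.to_bytes(8, "little")).digest() * (BLOCK // 32)

    def verify(path: str) -> None:
        with open(path, "r+b") as dev:
            for lba in range(NUM_BLOCKS):          # write phase
                dev.seek(lba * BLOCK)
                dev.write(pattern(lba))
            dev.flush()
            os.fsync(dev.fileno())
            for lba in range(NUM_BLOCKS):          # read-back and compare phase
                dev.seek(lba * BLOCK)
                if dev.read(BLOCK) != pattern(lba):
                    raise RuntimeError(f"miscompare at LBA {lba}")

    # verify("/tmp/test.img")   # e.g. after: truncate -s 4M /tmp/test.img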
00:11:57.987 00:11:57.987 Latency(us) 00:11:57.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.987 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x1000 00:11:57.987 Malloc0 : 5.19 1183.37 4.62 0.00 0.00 107999.42 439.09 370776.47 00:11:57.987 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x1000 length 0x1000 00:11:57.987 Malloc0 : 5.25 1194.31 4.67 0.00 0.00 102773.91 573.44 210554.06 00:11:57.987 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x800 00:11:57.987 Malloc1p0 : 5.19 616.01 2.41 0.00 0.00 207031.27 3224.37 204682.04 00:11:57.987 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x800 length 0x800 00:11:57.987 Malloc1p0 : 5.18 617.93 2.41 0.00 0.00 206870.75 2149.58 219781.53 00:11:57.987 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x800 00:11:57.987 Malloc1p1 : 5.20 615.69 2.41 0.00 0.00 206554.37 3185.05 199648.87 00:11:57.987 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x800 length 0x800 00:11:57.987 Malloc1p1 : 5.18 617.68 2.41 0.00 0.00 206439.94 3185.05 214748.36 00:11:57.987 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p0 : 5.20 615.36 2.40 0.00 0.00 206085.38 3145.73 196293.43 00:11:57.987 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p0 : 5.18 617.43 2.41 0.00 0.00 205925.14 3198.16 210554.06 00:11:57.987 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p1 : 5.20 615.03 2.40 0.00 0.00 205601.18 3132.62 190421.40 00:11:57.987 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p1 : 5.18 617.18 2.41 0.00 0.00 205433.47 3158.84 205520.90 00:11:57.987 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p2 : 5.20 614.80 2.40 0.00 0.00 205079.02 3145.73 185388.24 00:11:57.987 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p2 : 5.19 616.93 2.41 0.00 0.00 204916.46 3106.41 200487.73 00:11:57.987 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p3 : 5.21 614.57 2.40 0.00 0.00 204515.90 3263.69 179516.21 00:11:57.987 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p3 : 5.19 616.68 2.41 0.00 0.00 204398.55 3145.73 195454.57 00:11:57.987 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p4 : 5.21 614.35 2.40 0.00 0.00 203973.69 
3067.08 173644.19 00:11:57.987 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p4 : 5.19 616.42 2.41 0.00 0.00 203860.16 3263.69 188743.68 00:11:57.987 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p5 : 5.21 614.12 2.40 0.00 0.00 203434.92 3276.80 166933.30 00:11:57.987 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p5 : 5.19 616.08 2.41 0.00 0.00 203334.99 3053.98 182871.65 00:11:57.987 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p6 : 5.21 613.89 2.40 0.00 0.00 202887.87 3171.94 163577.86 00:11:57.987 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p6 : 5.20 615.76 2.41 0.00 0.00 202824.73 3263.69 178677.35 00:11:57.987 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x200 00:11:57.987 Malloc2p7 : 5.21 613.66 2.40 0.00 0.00 202399.18 2057.83 165255.58 00:11:57.987 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x200 length 0x200 00:11:57.987 Malloc2p7 : 5.20 615.44 2.40 0.00 0.00 202318.44 3158.84 176999.63 00:11:57.987 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x1000 00:11:57.987 TestPT : 5.22 613.43 2.40 0.00 0.00 202032.80 2791.83 159383.55 00:11:57.987 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x1000 length 0x1000 00:11:57.987 TestPT : 5.24 610.87 2.39 0.00 0.00 203056.30 10905.19 175321.91 00:11:57.987 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x2000 00:11:57.987 raid0 : 5.22 613.20 2.40 0.00 0.00 201569.72 2857.37 157705.83 00:11:57.987 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x2000 length 0x2000 00:11:57.987 raid0 : 5.25 633.42 2.47 0.00 0.00 195546.72 2844.26 166094.44 00:11:57.987 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.987 Verification LBA range: start 0x0 length 0x2000 00:11:57.987 concat0 : 5.25 633.45 2.47 0.00 0.00 194678.71 2660.76 159383.55 00:11:57.988 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.988 Verification LBA range: start 0x2000 length 0x2000 00:11:57.988 concat0 : 5.26 632.86 2.47 0.00 0.00 195223.69 2844.26 164416.72 00:11:57.988 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.988 Verification LBA range: start 0x0 length 0x1000 00:11:57.988 raid1 : 5.26 632.88 2.47 0.00 0.00 194451.09 3053.98 167772.16 00:11:57.988 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.988 Verification LBA range: start 0x1000 length 0x1000 00:11:57.988 raid1 : 5.26 632.33 2.47 0.00 0.00 194937.31 3211.26 161061.27 00:11:57.988 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:57.988 Verification LBA range: 
start 0x0 length 0x4e2 00:11:57.988 AIO0 : 5.26 632.40 2.47 0.00 0.00 194138.78 1081.34 171966.46 00:11:57.988 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:57.988 Verification LBA range: start 0x4e2 length 0x4e2 00:11:57.988 AIO0 : 5.27 632.06 2.47 0.00 0.00 194599.43 1513.88 165255.58 00:11:57.988 =================================================================================================================== 00:11:57.988 Total : 20959.62 81.87 0.00 0.00 191084.86 439.09 370776.47 00:11:58.247 00:11:58.247 real 0m6.383s 00:11:58.247 user 0m11.881s 00:11:58.247 sys 0m0.367s 00:11:58.247 10:22:11 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.247 10:22:11 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:58.247 ************************************ 00:11:58.247 END TEST bdev_verify 00:11:58.247 ************************************ 00:11:58.247 10:22:11 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:58.247 10:22:11 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:58.247 10:22:11 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.247 10:22:11 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:58.247 ************************************ 00:11:58.247 START TEST bdev_verify_big_io 00:11:58.247 ************************************ 00:11:58.247 10:22:11 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:58.506 [2024-07-26 10:22:11.177108] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
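The big-I/O variant that starts here is the same bdevperf verify run but with -o 65536 (64 KiB requests) in place of -o 4096. With requests that large the requested queue depth of 128 cannot always be honoured; the warnings a little further down show bdevperf clamping it to 32 for each Malloc2pX bdev and to 78 for AIO0. One way to pull those effective depths out of a log like this one; the regular expression is just an assumption based on the warning wording shown below.

    import re

    # Extract "queue depth limited" warnings of the form seen in this log.
    WARN = re.compile(r"submitted to the bdev (\S+) simultaneously \((\d+)\)\.")

    def effective_depths(log_text: str, requested: int = 128) -> dict[str, int]:
        """Map bdev name -> effective queue depth after clamping."""
        depths: dict[str, int] = {}
        for name, limit in WARN.findall(log_text):
            depths[name] = min(requested, int(limit))
        return depths

    sample = ("Due to constraints of verify job, queue depth (-q, 128) can't exceed "
              "the number of IO requests which can be submitted to the bdev "
              "Malloc2p0 simultaneously (32). Queue depth is limited to 32")
    print(effective_depths(sample))   # {'Malloc2p0': 32}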
00:11:58.506 [2024-07-26 10:22:11.177169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330158 ] 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:58.506 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.506 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:58.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.507 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:58.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.507 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:58.507 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:58.507 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:58.507 [2024-07-26 10:22:11.309231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.507 [2024-07-26 10:22:11.353801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.507 [2024-07-26 10:22:11.353806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.766 [2024-07-26 10:22:11.487008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:58.766 [2024-07-26 10:22:11.487064] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:58.766 [2024-07-26 10:22:11.487077] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:58.766 [2024-07-26 10:22:11.495022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:58.766 [2024-07-26 10:22:11.495046] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:58.766 [2024-07-26 10:22:11.503037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:58.766 [2024-07-26 10:22:11.503059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:58.766 [2024-07-26 10:22:11.574014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:58.766 [2024-07-26 10:22:11.574060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.766 [2024-07-26 10:22:11.574076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b67c40 00:11:58.766 [2024-07-26 10:22:11.574088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.766 [2024-07-26 10:22:11.575536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.766 [2024-07-26 10:22:11.575563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:59.026 [2024-07-26 10:22:11.731228] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). 
Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.732222] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.733713] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.734819] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.736504] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.737669] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.739346] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.741009] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.742185] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.743798] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.744676] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.746018] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.746900] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). 
Queue depth is limited to 32 00:11:59.026 [2024-07-26 10:22:11.748243] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:59.027 [2024-07-26 10:22:11.749126] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:59.027 [2024-07-26 10:22:11.750489] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:59.027 [2024-07-26 10:22:11.773445] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:59.027 [2024-07-26 10:22:11.775357] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:59.027 Running I/O for 5 seconds... 00:12:07.142 00:12:07.142 Latency(us) 00:12:07.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.143 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x100 00:12:07.143 Malloc0 : 5.67 158.07 9.88 0.00 0.00 794003.80 825.75 2214592.51 00:12:07.143 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x100 length 0x100 00:12:07.143 Malloc0 : 5.91 151.54 9.47 0.00 0.00 828808.92 829.03 2550136.83 00:12:07.143 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x80 00:12:07.143 Malloc1p0 : 6.17 70.71 4.42 0.00 0.00 1652888.52 2988.44 2630667.47 00:12:07.143 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x80 length 0x80 00:12:07.143 Malloc1p0 : 6.33 60.62 3.79 0.00 0.00 1955157.32 2752.51 2966211.79 00:12:07.143 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x80 00:12:07.143 Malloc1p1 : 6.61 38.74 2.42 0.00 0.00 2923285.29 1376.26 5127117.21 00:12:07.143 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x80 length 0x80 00:12:07.143 Malloc1p1 : 6.64 38.58 2.41 0.00 0.00 2921587.19 1376.26 5046586.57 00:12:07.143 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p0 : 6.17 25.93 1.62 0.00 0.00 1092102.24 583.27 1879048.19 00:12:07.143 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p0 : 6.16 25.96 1.62 0.00 0.00 1083466.31 596.38 1664299.83 00:12:07.143 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA 
range: start 0x0 length 0x20 00:12:07.143 Malloc2p1 : 6.17 25.93 1.62 0.00 0.00 1082772.49 576.72 1852204.65 00:12:07.143 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p1 : 6.17 25.95 1.62 0.00 0.00 1074195.77 586.55 1637456.28 00:12:07.143 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p2 : 6.17 25.92 1.62 0.00 0.00 1073761.60 573.44 1838782.87 00:12:07.143 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p2 : 6.26 28.13 1.76 0.00 0.00 998828.40 586.55 1617323.62 00:12:07.143 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p3 : 6.17 25.92 1.62 0.00 0.00 1064653.26 589.82 1798517.56 00:12:07.143 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p3 : 6.26 28.13 1.76 0.00 0.00 990535.15 593.10 1590480.08 00:12:07.143 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p4 : 6.17 25.91 1.62 0.00 0.00 1054796.69 570.16 1771674.01 00:12:07.143 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p4 : 6.26 28.12 1.76 0.00 0.00 981884.16 589.82 1570347.42 00:12:07.143 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p5 : 6.26 28.12 1.76 0.00 0.00 975267.50 579.99 1758252.24 00:12:07.143 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p5 : 6.26 28.12 1.76 0.00 0.00 973461.77 583.27 1543503.87 00:12:07.143 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p6 : 6.26 28.11 1.76 0.00 0.00 967048.85 596.38 1731408.69 00:12:07.143 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p6 : 6.26 28.11 1.76 0.00 0.00 965711.77 583.27 1516660.33 00:12:07.143 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x20 00:12:07.143 Malloc2p7 : 6.26 28.10 1.76 0.00 0.00 959035.77 576.72 1704565.15 00:12:07.143 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x20 length 0x20 00:12:07.143 Malloc2p7 : 6.26 28.09 1.76 0.00 0.00 957420.91 589.82 1496527.67 00:12:07.143 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x100 00:12:07.143 TestPT : 6.74 38.27 2.39 0.00 0.00 2669288.39 103599.31 3946001.20 00:12:07.143 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x100 length 0x100 00:12:07.143 TestPT : 6.48 37.02 2.31 0.00 0.00 2807924.00 94371.84 3731252.84 00:12:07.143 Job: raid0 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x200 00:12:07.143 raid0 : 6.52 44.17 2.76 0.00 0.00 2288846.18 1435.24 4590246.30 00:12:07.143 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x200 length 0x200 00:12:07.143 raid0 : 6.48 44.41 2.78 0.00 0.00 2270646.65 1448.35 4482872.12 00:12:07.143 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x200 00:12:07.143 concat0 : 6.61 53.25 3.33 0.00 0.00 1842975.94 1428.68 4402341.48 00:12:07.143 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x200 length 0x200 00:12:07.143 concat0 : 6.64 53.02 3.31 0.00 0.00 1852299.20 1461.45 4321810.84 00:12:07.143 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x100 00:12:07.143 raid1 : 6.79 63.65 3.98 0.00 0.00 1519761.97 1874.33 4241280.20 00:12:07.143 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x100 length 0x100 00:12:07.143 raid1 : 6.80 63.56 3.97 0.00 0.00 1520280.78 1848.12 4160749.57 00:12:07.143 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x0 length 0x4e 00:12:07.143 AIO0 : 6.79 60.68 3.79 0.00 0.00 947284.80 563.61 2670932.79 00:12:07.143 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:07.143 Verification LBA range: start 0x4e length 0x4e 00:12:07.143 AIO0 : 6.80 63.54 3.97 0.00 0.00 902509.91 586.55 2402497.33 00:12:07.143 =================================================================================================================== 00:12:07.143 Total : 1474.38 92.15 0.00 0.00 1421427.78 563.61 5127117.21 00:12:07.143 00:12:07.143 real 0m7.900s 00:12:07.143 user 0m14.954s 00:12:07.143 sys 0m0.386s 00:12:07.143 10:22:19 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.143 10:22:19 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.143 ************************************ 00:12:07.143 END TEST bdev_verify_big_io 00:12:07.143 ************************************ 00:12:07.143 10:22:19 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:07.143 10:22:19 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:07.143 10:22:19 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.143 10:22:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:07.143 ************************************ 00:12:07.143 START TEST bdev_write_zeroes 00:12:07.143 ************************************ 00:12:07.143 10:22:19 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:07.143 [2024-07-26 10:22:19.201124] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
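Each sub-test in this log is wrapped by run_test, which prints a START TEST / END TEST banner pair plus the shell's real/user/sys timing just before the closing banner; bdev_fio above reports 0m23.372s of wall time, and bdev_verify_big_io, which just ended here, reports 0m7.900s. A rough Python scan for those markers is sketched below; it assumes the banner text and the "real XmY.Zs" wording used in this log.

    import re

    REAL = re.compile(r"\breal\s+(\d+)m([\d.]+)s")
    END = re.compile(r"END TEST (\S+)")

    def test_wall_times(log_text: str) -> dict[str, float]:
        """Wall-clock seconds per sub-test, keyed by the END TEST name.

        Assumes the 'real XmY.Zs' value is printed shortly before each
        END TEST banner, as in this log.
        """
        times: dict[str, float] = {}
        last_real = None
        for line in log_text.splitlines():
            m = REAL.search(line)
            if m:
                last_real = int(m.group(1)) * 60 + float(m.group(2))
            e = END.search(line)
            if e and last_real is not None:
                times[e.group(1)] = last_real
                last_real = None
        return times

    sample = "real 0m7.900s\nuser 0m14.954s\n*** END TEST bdev_verify_big_io ***"
    print(test_wall_times(sample))   # {'bdev_verify_big_io': 7.9}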
00:12:07.143 [2024-07-26 10:22:19.201258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331498 ] 00:12:07.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.143 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:07.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.143 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:07.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.143 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:07.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.143 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:07.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.143 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:07.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:07.144 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:07.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.144 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:07.144 [2024-07-26 10:22:19.416803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.144 [2024-07-26 10:22:19.465708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.144 [2024-07-26 10:22:19.601199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:07.144 [2024-07-26 10:22:19.601254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:07.144 [2024-07-26 10:22:19.601268] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:07.144 [2024-07-26 10:22:19.609200] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:07.144 [2024-07-26 10:22:19.609225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:07.144 [2024-07-26 10:22:19.617210] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:07.144 [2024-07-26 10:22:19.617233] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:07.144 [2024-07-26 10:22:19.688465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:07.144 [2024-07-26 10:22:19.688512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.144 [2024-07-26 10:22:19.688529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2933cf0 00:12:07.144 [2024-07-26 10:22:19.688540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.144 [2024-07-26 10:22:19.689786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.144 [2024-07-26 10:22:19.689813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:07.144 Running I/O for 1 seconds... 
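Every bdevperf start in this log repeats the same block of qat_pci_device_allocate()/EAL messages: the crypto driver probes each requested QAT function on PCI buses 0000:3d and 0000:3f, reports that the maximum number of QAT devices has been reached, and marks each remaining function as unusable. The tests here go on to run normally despite those messages. A small Python count of how many such functions are reported per bus is sketched below; the regular expression is assumed from the EAL message wording above.

    import re
    from collections import Counter

    # Count "Requested device <BDF> cannot be used" messages per PCI bus,
    # matching the EAL lines repeated before each bdevperf run above.
    BDF = re.compile(r"Requested device (\w{4}:\w{2}:\d{2}\.\d) cannot be used")

    def unusable_qat_functions(log_text: str) -> Counter:
        buses = Counter()
        for bdf in BDF.findall(log_text):
            buses[bdf.split(":")[1]] += 1     # group by PCI bus (3d / 3f)
        return buses

    sample = ("EAL: Requested device 0000:3d:01.0 cannot be used\n"
              "EAL: Requested device 0000:3f:02.7 cannot be used")
    print(unusable_qat_functions(sample))   # Counter({'3d': 1, '3f': 1})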
00:12:08.081 00:12:08.081 Latency(us) 00:12:08.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.081 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc0 : 1.04 5393.31 21.07 0.00 0.00 23711.67 612.76 39636.17 00:12:08.081 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc1p0 : 1.05 5385.77 21.04 0.00 0.00 23705.81 832.31 38797.31 00:12:08.081 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc1p1 : 1.05 5378.72 21.01 0.00 0.00 23688.83 845.41 37958.45 00:12:08.081 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p0 : 1.05 5371.64 20.98 0.00 0.00 23672.03 835.58 37119.59 00:12:08.081 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p1 : 1.05 5364.65 20.96 0.00 0.00 23655.33 832.31 36490.44 00:12:08.081 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p2 : 1.05 5357.67 20.93 0.00 0.00 23642.84 832.31 35651.58 00:12:08.081 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p3 : 1.05 5350.65 20.90 0.00 0.00 23624.77 835.58 34812.72 00:12:08.081 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p4 : 1.05 5343.70 20.87 0.00 0.00 23606.66 838.86 33973.86 00:12:08.081 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p5 : 1.06 5336.79 20.85 0.00 0.00 23589.94 822.48 33135.00 00:12:08.081 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p6 : 1.06 5329.83 20.82 0.00 0.00 23572.86 829.03 32296.14 00:12:08.081 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 Malloc2p7 : 1.06 5322.93 20.79 0.00 0.00 23554.97 829.03 31457.28 00:12:08.081 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 TestPT : 1.06 5316.07 20.77 0.00 0.00 23536.29 865.08 30618.42 00:12:08.081 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 raid0 : 1.06 5308.11 20.73 0.00 0.00 23513.36 1494.22 29150.41 00:12:08.081 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 concat0 : 1.06 5300.37 20.70 0.00 0.00 23465.55 1481.11 27682.41 00:12:08.081 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 raid1 : 1.06 5290.66 20.67 0.00 0.00 23414.93 2372.40 25270.68 00:12:08.081 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:08.081 AIO0 : 1.07 5284.75 20.64 0.00 0.00 23333.15 950.27 24222.11 00:12:08.081 =================================================================================================================== 00:12:08.081 Total : 85435.62 333.73 0.00 0.00 23580.56 612.76 39636.17 00:12:08.648 00:12:08.648 real 0m2.172s 00:12:08.648 user 0m1.707s 00:12:08.648 sys 0m0.407s 00:12:08.648 10:22:21 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.648 10:22:21 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:08.648 ************************************ 00:12:08.648 END TEST bdev_write_zeroes 00:12:08.648 ************************************ 00:12:08.648 10:22:21 blockdev_general 
-- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:08.648 10:22:21 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:08.648 10:22:21 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.648 10:22:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:08.648 ************************************ 00:12:08.648 START TEST bdev_json_nonenclosed 00:12:08.648 ************************************ 00:12:08.649 10:22:21 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:08.649 [2024-07-26 10:22:21.458465] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:12:08.649 [2024-07-26 10:22:21.458586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331857 ] 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:12:08.939 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:08.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:08.939 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:08.939 [2024-07-26 10:22:21.674192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.939 [2024-07-26 10:22:21.717782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.939 [2024-07-26 10:22:21.717848] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:12:08.939 [2024-07-26 10:22:21.717864] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:08.939 [2024-07-26 10:22:21.717875] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:08.939 00:12:08.939 real 0m0.436s 00:12:08.939 user 0m0.206s 00:12:08.939 sys 0m0.225s 00:12:08.939 10:22:21 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.939 10:22:21 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:08.939 ************************************ 00:12:08.939 END TEST bdev_json_nonenclosed 00:12:08.939 ************************************ 00:12:09.198 10:22:21 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:09.198 10:22:21 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:09.198 10:22:21 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.198 10:22:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.198 ************************************ 00:12:09.198 START TEST bdev_json_nonarray 00:12:09.198 ************************************ 00:12:09.198 10:22:21 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:09.198 [2024-07-26 10:22:21.935520] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
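The two tests around this point are negative tests of the --json config loader: nonenclosed.json (just finished) triggers the "not enclosed in {}" error seen above, and nonarray.json (started here) triggers the "'subsystems' should be an array" error reported just below; in both cases the app is expected to stop on a non-zero code. A hedged sketch of the difference follows, with the file contents invented purely for illustration (the real nonenclosed.json and nonarray.json live under spdk/test/bdev and are not reproduced here); only the top-level shape matters.

    import json

    valid = '{"subsystems": []}'         # top level is an object, subsystems is an array
    nonarray = '{"subsystems": {}}'      # subsystems present but not an array
    nonenclosed = '"subsystems": []'     # not enclosed in {} -> not a single JSON document

    for label, text in [("valid", valid), ("nonarray", nonarray), ("nonenclosed", nonenclosed)]:
        try:
            cfg = json.loads(text)
            ok = isinstance(cfg, dict) and isinstance(cfg.get("subsystems"), list)
            print(label, "parses;", "shape ok" if ok else "wrong shape")
        except json.JSONDecodeError as err:
            print(label, "rejected:", err.msg)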
00:12:09.198 [2024-07-26 10:22:21.935574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332065 ] 00:12:09.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.198 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:09.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:09.199 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:09.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.199 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:09.199 [2024-07-26 10:22:22.068637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.458 [2024-07-26 10:22:22.112084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.458 [2024-07-26 10:22:22.112156] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:09.458 [2024-07-26 10:22:22.112172] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:09.458 [2024-07-26 10:22:22.112183] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.458 00:12:09.458 real 0m0.306s 00:12:09.458 user 0m0.163s 00:12:09.458 sys 0m0.142s 00:12:09.458 10:22:22 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.458 10:22:22 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 ************************************ 00:12:09.458 END TEST bdev_json_nonarray 00:12:09.458 ************************************ 00:12:09.458 10:22:22 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:12:09.458 10:22:22 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:12:09.458 10:22:22 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:09.458 10:22:22 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.458 10:22:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 ************************************ 00:12:09.458 START TEST bdev_qos 00:12:09.458 ************************************ 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=3332088 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 3332088' 00:12:09.458 Process qos testing pid: 3332088 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 3332088 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- 
common/autotest_common.sh@831 -- # '[' -z 3332088 ']' 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 10:22:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:09.458 [2024-07-26 10:22:22.321421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:12:09.458 [2024-07-26 10:22:22.321475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332088 ] 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:12:09.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:09.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:09.719 [2024-07-26 10:22:22.442684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.719 [2024-07-26 10:22:22.487528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:10.660 Malloc_0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- 
common/autotest_common.sh@901 -- # local i 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:10.660 [ 00:12:10.660 { 00:12:10.660 "name": "Malloc_0", 00:12:10.660 "aliases": [ 00:12:10.660 "6030b68d-120a-423a-a296-92f265b9567e" 00:12:10.660 ], 00:12:10.660 "product_name": "Malloc disk", 00:12:10.660 "block_size": 512, 00:12:10.660 "num_blocks": 262144, 00:12:10.660 "uuid": "6030b68d-120a-423a-a296-92f265b9567e", 00:12:10.660 "assigned_rate_limits": { 00:12:10.660 "rw_ios_per_sec": 0, 00:12:10.660 "rw_mbytes_per_sec": 0, 00:12:10.660 "r_mbytes_per_sec": 0, 00:12:10.660 "w_mbytes_per_sec": 0 00:12:10.660 }, 00:12:10.660 "claimed": false, 00:12:10.660 "zoned": false, 00:12:10.660 "supported_io_types": { 00:12:10.660 "read": true, 00:12:10.660 "write": true, 00:12:10.660 "unmap": true, 00:12:10.660 "flush": true, 00:12:10.660 "reset": true, 00:12:10.660 "nvme_admin": false, 00:12:10.660 "nvme_io": false, 00:12:10.660 "nvme_io_md": false, 00:12:10.660 "write_zeroes": true, 00:12:10.660 "zcopy": true, 00:12:10.660 "get_zone_info": false, 00:12:10.660 "zone_management": false, 00:12:10.660 "zone_append": false, 00:12:10.660 "compare": false, 00:12:10.660 "compare_and_write": false, 00:12:10.660 "abort": true, 00:12:10.660 "seek_hole": false, 00:12:10.660 "seek_data": false, 00:12:10.660 "copy": true, 00:12:10.660 "nvme_iov_md": false 00:12:10.660 }, 00:12:10.660 "memory_domains": [ 00:12:10.660 { 00:12:10.660 "dma_device_id": "system", 00:12:10.660 "dma_device_type": 1 00:12:10.660 }, 00:12:10.660 { 00:12:10.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.660 "dma_device_type": 2 00:12:10.660 } 00:12:10.660 ], 00:12:10.660 "driver_specific": {} 00:12:10.660 } 00:12:10.660 ] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:10.660 Null_1 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:10.660 10:22:23 
blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:10.660 [ 00:12:10.660 { 00:12:10.660 "name": "Null_1", 00:12:10.660 "aliases": [ 00:12:10.660 "f8534eba-c670-4e3c-96c7-29cf8f72af73" 00:12:10.660 ], 00:12:10.660 "product_name": "Null disk", 00:12:10.660 "block_size": 512, 00:12:10.660 "num_blocks": 262144, 00:12:10.660 "uuid": "f8534eba-c670-4e3c-96c7-29cf8f72af73", 00:12:10.660 "assigned_rate_limits": { 00:12:10.660 "rw_ios_per_sec": 0, 00:12:10.660 "rw_mbytes_per_sec": 0, 00:12:10.660 "r_mbytes_per_sec": 0, 00:12:10.660 "w_mbytes_per_sec": 0 00:12:10.660 }, 00:12:10.660 "claimed": false, 00:12:10.660 "zoned": false, 00:12:10.660 "supported_io_types": { 00:12:10.660 "read": true, 00:12:10.660 "write": true, 00:12:10.660 "unmap": false, 00:12:10.660 "flush": false, 00:12:10.660 "reset": true, 00:12:10.660 "nvme_admin": false, 00:12:10.660 "nvme_io": false, 00:12:10.660 "nvme_io_md": false, 00:12:10.660 "write_zeroes": true, 00:12:10.660 "zcopy": false, 00:12:10.660 "get_zone_info": false, 00:12:10.660 "zone_management": false, 00:12:10.660 "zone_append": false, 00:12:10.660 "compare": false, 00:12:10.660 "compare_and_write": false, 00:12:10.660 "abort": true, 00:12:10.660 "seek_hole": false, 00:12:10.660 "seek_data": false, 00:12:10.660 "copy": false, 00:12:10.660 "nvme_iov_md": false 00:12:10.660 }, 00:12:10.660 "driver_specific": {} 00:12:10.660 } 00:12:10.660 ] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:12:10.660 10:22:23 
blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:10.660 10:22:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:12:10.660 Running I/O for 60 seconds... 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 68281.57 273126.26 0.00 0.00 276480.00 0.00 0.00 ' 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=68281.57 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 68281 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=68281 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=17000 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 17000 -gt 1000 ']' 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.935 10:22:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:15.935 ************************************ 00:12:15.935 START TEST bdev_qos_iops 00:12:15.935 ************************************ 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 17000 IOPS Malloc_0 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=17000 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:15.935 10:22:28 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 
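The records below are the heart of the IOPS case: bdev/blockdev.sh derives a cap (here 17000 IOPS, taken from the ~68281 IOPS unthrottled rate measured above), applies it with 'rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0', then re-measures with iostat.py and requires the result to sit within roughly +/-10% of the cap. A minimal bash sketch of that window check follows; check_qos_window is a hypothetical helper name, not the literal script, and it assumes the measured value has already been truncated to an integer (the harness echoes 17000 from 17000.28):

    # Hedged sketch of the +/-10% QoS window check (IOPS case).
    check_qos_window() {
        local qos_limit=$1 measured=$2
        local lower=$((qos_limit * 9 / 10))    # 17000 -> 15300
        local upper=$((qos_limit * 11 / 10))   # 17000 -> 18700
        [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ]
    }
    # e.g. check_qos_window 17000 17000 && echo "within window"

The Malloc_0 rate sampled in the next records (about 17000 IOPS against a 15300-18700 window) is expected to pass.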
00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 17000.28 68001.12 0.00 0.00 69428.00 0.00 0.00 ' 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=17000.28 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 17000 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=17000 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=15300 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=18700 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 17000 -lt 15300 ']' 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 17000 -gt 18700 ']' 00:12:21.204 00:12:21.204 real 0m5.257s 00:12:21.204 user 0m0.101s 00:12:21.204 sys 0m0.052s 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.204 10:22:33 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:12:21.204 ************************************ 00:12:21.204 END TEST bdev_qos_iops 00:12:21.204 ************************************ 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:12:21.204 10:22:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 20813.13 83252.52 0.00 0.00 84992.00 0.00 0.00 ' 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=84992.00 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 84992 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=84992 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']' 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.478 10:22:39 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:26.478 ************************************ 00:12:26.478 START TEST bdev_qos_bw 00:12:26.478 ************************************ 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 8 BANDWIDTH Null_1 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:12:26.478 10:22:39 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2049.67 8198.69 0.00 0.00 8364.00 0.00 0.00 ' 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8364.00 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8364 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8364 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8364 -lt 7372 ']' 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8364 -gt 9011 ']' 00:12:31.752 00:12:31.752 real 0m5.242s 00:12:31.752 user 0m0.103s 00:12:31.752 sys 0m0.043s 00:12:31.752 10:22:44 
blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.752 10:22:44 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:12:31.752 ************************************ 00:12:31.752 END TEST bdev_qos_bw 00:12:31.752 ************************************ 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:31.752 10:22:44 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.753 10:22:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:31.753 ************************************ 00:12:31.753 START TEST bdev_qos_ro_bw 00:12:31.753 ************************************ 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:31.753 10:22:44 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.76 2051.04 0.00 0.00 2064.00 0.00 0.00 ' 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- 
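For the bandwidth cases the limit is specified in MiB/s ('rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0' above) while iostat.py reports KiB/s, so the same +/-10% window is applied to the limit multiplied by 1024. A hedged sketch, with check_bw_window as a hypothetical helper name rather than the literal script:

    # BANDWIDTH case: convert the MiB/s limit to KiB/s, then apply the +/-10% window.
    check_bw_window() {
        local limit_mb=$1 measured_kb=$2
        local limit_kb=$((limit_mb * 1024))   # 2 MiB/s -> 2048 KiB/s
        local lower=$((limit_kb * 9 / 10))    # 1843
        local upper=$((limit_kb * 11 / 10))   # 2252
        [ "$measured_kb" -ge "$lower" ] && [ "$measured_kb" -le "$upper" ]
    }

The window computed in the following records (1843-2252 around 2048 KiB/s) should therefore contain the 2064 KiB/s just sampled for Malloc_0, exactly as it contained 8364 KiB/s for the earlier 8 MiB/s limit on Null_1 (window 7372-9011).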
bdev/blockdev.sh@393 -- # qos_limit=2048 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']' 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']' 00:12:37.096 00:12:37.096 real 0m5.167s 00:12:37.096 user 0m0.099s 00:12:37.096 sys 0m0.041s 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.096 10:22:49 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:12:37.096 ************************************ 00:12:37.096 END TEST bdev_qos_ro_bw 00:12:37.096 ************************************ 00:12:37.096 10:22:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:37.096 10:22:49 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.096 10:22:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:37.665 00:12:37.665 Latency(us) 00:12:37.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.665 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:37.665 Malloc_0 : 26.68 22997.43 89.83 0.00 0.00 11026.03 1848.12 503316.48 00:12:37.665 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:37.665 Null_1 : 26.83 21757.01 84.99 0.00 0.00 11739.89 720.90 150156.08 00:12:37.665 =================================================================================================================== 00:12:37.665 Total : 44754.45 174.82 0.00 0.00 11374.04 720.90 503316.48 00:12:37.665 0 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 3332088 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 3332088 ']' 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 3332088 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3332088 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3332088' 00:12:37.665 killing process with pid 3332088 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 3332088 00:12:37.665 
Received shutdown signal, test time was about 26.894200 seconds 00:12:37.665 00:12:37.665 Latency(us) 00:12:37.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.665 =================================================================================================================== 00:12:37.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:37.665 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 3332088 00:12:37.925 10:22:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:12:37.925 00:12:37.925 real 0m28.380s 00:12:37.925 user 0m29.176s 00:12:37.925 sys 0m0.828s 00:12:37.925 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.925 10:22:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:37.925 ************************************ 00:12:37.925 END TEST bdev_qos 00:12:37.925 ************************************ 00:12:37.925 10:22:50 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:37.925 10:22:50 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:37.925 10:22:50 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.925 10:22:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:37.925 ************************************ 00:12:37.925 START TEST bdev_qd_sampling 00:12:37.925 ************************************ 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=3336939 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 3336939' 00:12:37.925 Process bdev QD sampling period testing pid: 3336939 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 3336939 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 3336939 ']' 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.925 10:22:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:37.925 [2024-07-26 10:22:50.788404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:12:37.925 [2024-07-26 10:22:50.788461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336939 ] 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:38.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.184 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:38.185 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:38.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:38.185 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:38.185 [2024-07-26 10:22:50.920073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.185 [2024-07-26 10:22:50.965569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.185 [2024-07-26 10:22:50.965575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:39.123 Malloc_QD 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:39.123 [ 00:12:39.123 { 00:12:39.123 "name": "Malloc_QD", 00:12:39.123 "aliases": [ 00:12:39.123 "a4b978d3-35cd-4829-98f3-196e5772e212" 00:12:39.123 ], 00:12:39.123 "product_name": "Malloc disk", 00:12:39.123 "block_size": 512, 00:12:39.123 "num_blocks": 262144, 00:12:39.123 "uuid": "a4b978d3-35cd-4829-98f3-196e5772e212", 00:12:39.123 "assigned_rate_limits": { 00:12:39.123 "rw_ios_per_sec": 0, 00:12:39.123 "rw_mbytes_per_sec": 0, 00:12:39.123 "r_mbytes_per_sec": 0, 00:12:39.123 "w_mbytes_per_sec": 0 00:12:39.123 }, 00:12:39.123 "claimed": false, 00:12:39.123 "zoned": false, 00:12:39.123 "supported_io_types": { 00:12:39.123 "read": true, 00:12:39.123 "write": true, 00:12:39.123 "unmap": true, 00:12:39.123 "flush": true, 00:12:39.123 "reset": true, 00:12:39.123 "nvme_admin": false, 00:12:39.123 "nvme_io": false, 00:12:39.123 "nvme_io_md": false, 00:12:39.123 "write_zeroes": true, 00:12:39.123 "zcopy": true, 00:12:39.123 "get_zone_info": false, 00:12:39.123 "zone_management": false, 00:12:39.123 "zone_append": false, 00:12:39.123 "compare": false, 00:12:39.123 "compare_and_write": false, 00:12:39.123 "abort": true, 00:12:39.123 "seek_hole": false, 00:12:39.123 "seek_data": false, 00:12:39.123 "copy": true, 00:12:39.123 "nvme_iov_md": false 00:12:39.123 }, 00:12:39.123 "memory_domains": [ 00:12:39.123 { 00:12:39.123 "dma_device_id": "system", 00:12:39.123 "dma_device_type": 1 00:12:39.123 }, 00:12:39.123 { 00:12:39.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.123 "dma_device_type": 2 00:12:39.123 } 00:12:39.123 ], 00:12:39.123 "driver_specific": {} 00:12:39.123 } 00:12:39.123 ] 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:12:39.123 10:22:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.123 Running I/O for 5 seconds... 
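While the 5-second run proceeds, the sampling test that follows enables per-bdev queue-depth polling ('rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10') and then reads the counters back with 'rpc_cmd bdev_get_iostat -b Malloc_QD'. In the iostat payload below, weighted_io_time accumulates queue depth multiplied by polling time, so the average queue depth while busy is roughly weighted_io_time / io_time (10240 / 20 = 512 here, matching the reported queue_depth and consistent with two 256-deep bdevperf channels from -m 0x3 -q 256). A hedged sketch of the same query outside the harness, assuming the app listens on the default /var/tmp/spdk.sock and io_time is non-zero:

    # Hedged sketch: enable QD sampling on a bdev, then derive the average
    # queue depth from the sampled counters (weighted_io_time / io_time).
    ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    ./scripts/rpc.py bdev_get_iostat -b Malloc_QD | \
        jq -r '.bdevs[0] | "avg_qd=\(.weighted_io_time / .io_time)"'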
00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:12:41.030 "tick_rate": 2500000000, 00:12:41.030 "ticks": 14454300144253748, 00:12:41.030 "bdevs": [ 00:12:41.030 { 00:12:41.030 "name": "Malloc_QD", 00:12:41.030 "bytes_read": 782283264, 00:12:41.030 "num_read_ops": 190980, 00:12:41.030 "bytes_written": 0, 00:12:41.030 "num_write_ops": 0, 00:12:41.030 "bytes_unmapped": 0, 00:12:41.030 "num_unmap_ops": 0, 00:12:41.030 "bytes_copied": 0, 00:12:41.030 "num_copy_ops": 0, 00:12:41.030 "read_latency_ticks": 2437506129540, 00:12:41.030 "max_read_latency_ticks": 15550026, 00:12:41.030 "min_read_latency_ticks": 277984, 00:12:41.030 "write_latency_ticks": 0, 00:12:41.030 "max_write_latency_ticks": 0, 00:12:41.030 "min_write_latency_ticks": 0, 00:12:41.030 "unmap_latency_ticks": 0, 00:12:41.030 "max_unmap_latency_ticks": 0, 00:12:41.030 "min_unmap_latency_ticks": 0, 00:12:41.030 "copy_latency_ticks": 0, 00:12:41.030 "max_copy_latency_ticks": 0, 00:12:41.030 "min_copy_latency_ticks": 0, 00:12:41.030 "io_error": {}, 00:12:41.030 "queue_depth_polling_period": 10, 00:12:41.030 "queue_depth": 512, 00:12:41.030 "io_time": 20, 00:12:41.030 "weighted_io_time": 10240 00:12:41.030 } 00:12:41.030 ] 00:12:41.030 }' 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.030 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:41.030 00:12:41.030 Latency(us) 00:12:41.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.030 Job: Malloc_QD (Core Mask 0x1, workload: randread, 
depth: 256, IO size: 4096) 00:12:41.030 Malloc_QD : 1.98 49919.63 195.00 0.00 0.00 5115.87 1363.15 5478.81 00:12:41.030 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:41.031 Malloc_QD : 1.99 50250.32 196.29 0.00 0.00 5082.74 924.06 6239.03 00:12:41.031 =================================================================================================================== 00:12:41.031 Total : 100169.95 391.29 0.00 0.00 5099.24 924.06 6239.03 00:12:41.031 0 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 3336939 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 3336939 ']' 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 3336939 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3336939 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3336939' 00:12:41.031 killing process with pid 3336939 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 3336939 00:12:41.031 Received shutdown signal, test time was about 2.072231 seconds 00:12:41.031 00:12:41.031 Latency(us) 00:12:41.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.031 =================================================================================================================== 00:12:41.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.031 10:22:53 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 3336939 00:12:41.290 10:22:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:12:41.290 00:12:41.290 real 0m3.374s 00:12:41.290 user 0m6.652s 00:12:41.290 sys 0m0.429s 00:12:41.290 10:22:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.290 10:22:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:12:41.290 ************************************ 00:12:41.290 END TEST bdev_qd_sampling 00:12:41.290 ************************************ 00:12:41.290 10:22:54 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:12:41.290 10:22:54 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:41.290 10:22:54 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.290 10:22:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:41.290 ************************************ 00:12:41.290 START TEST bdev_error 00:12:41.290 ************************************ 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 
00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=3337524 00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 3337524' 00:12:41.550 Process error testing pid: 3337524 00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:41.550 10:22:54 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 3337524 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 3337524 ']' 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.550 10:22:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:41.550 [2024-07-26 10:22:54.252455] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:12:41.550 [2024-07-26 10:22:54.252510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337524 ] 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:41.550 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.550 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:41.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:41.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:41.551 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:41.551 [2024-07-26 10:22:54.374854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.551 [2024-07-26 10:22:54.420122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:42.488 10:22:55 blockdev_general.bdev_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 Dev_1 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 [ 00:12:42.488 { 00:12:42.488 "name": "Dev_1", 00:12:42.488 "aliases": [ 00:12:42.488 "1b0ed9d6-e8e2-469c-8aaa-a484b6f08b63" 00:12:42.488 ], 00:12:42.488 "product_name": "Malloc disk", 00:12:42.488 "block_size": 512, 00:12:42.488 "num_blocks": 262144, 00:12:42.488 "uuid": "1b0ed9d6-e8e2-469c-8aaa-a484b6f08b63", 00:12:42.488 "assigned_rate_limits": { 00:12:42.488 "rw_ios_per_sec": 0, 00:12:42.488 "rw_mbytes_per_sec": 0, 00:12:42.488 "r_mbytes_per_sec": 0, 00:12:42.488 "w_mbytes_per_sec": 0 00:12:42.488 }, 00:12:42.488 "claimed": false, 00:12:42.488 "zoned": false, 00:12:42.488 "supported_io_types": { 00:12:42.488 "read": true, 00:12:42.488 "write": true, 00:12:42.488 "unmap": true, 00:12:42.488 "flush": true, 00:12:42.488 "reset": true, 00:12:42.488 "nvme_admin": false, 00:12:42.488 "nvme_io": false, 00:12:42.488 "nvme_io_md": false, 00:12:42.488 "write_zeroes": true, 00:12:42.488 "zcopy": true, 00:12:42.488 "get_zone_info": false, 00:12:42.488 "zone_management": false, 00:12:42.488 "zone_append": false, 00:12:42.488 "compare": false, 00:12:42.488 "compare_and_write": false, 00:12:42.488 "abort": true, 00:12:42.488 "seek_hole": false, 00:12:42.488 "seek_data": false, 00:12:42.488 "copy": true, 00:12:42.488 "nvme_iov_md": false 00:12:42.488 }, 00:12:42.488 "memory_domains": [ 00:12:42.488 { 00:12:42.488 "dma_device_id": "system", 00:12:42.488 "dma_device_type": 1 00:12:42.488 }, 00:12:42.488 { 00:12:42.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.488 "dma_device_type": 2 00:12:42.488 } 00:12:42.488 ], 00:12:42.488 "driver_specific": {} 00:12:42.488 } 00:12:42.488 ] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 
00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 true 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 Dev_2 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 [ 00:12:42.488 { 00:12:42.488 "name": "Dev_2", 00:12:42.488 "aliases": [ 00:12:42.488 "134f1f2d-63ab-47ed-8147-d67296c2136b" 00:12:42.488 ], 00:12:42.488 "product_name": "Malloc disk", 00:12:42.488 "block_size": 512, 00:12:42.488 "num_blocks": 262144, 00:12:42.488 "uuid": "134f1f2d-63ab-47ed-8147-d67296c2136b", 00:12:42.488 "assigned_rate_limits": { 00:12:42.488 "rw_ios_per_sec": 0, 00:12:42.488 "rw_mbytes_per_sec": 0, 00:12:42.488 "r_mbytes_per_sec": 0, 00:12:42.488 "w_mbytes_per_sec": 0 00:12:42.488 }, 00:12:42.488 "claimed": false, 00:12:42.488 "zoned": false, 00:12:42.488 "supported_io_types": { 00:12:42.488 "read": true, 00:12:42.488 "write": true, 00:12:42.488 "unmap": true, 00:12:42.488 "flush": true, 00:12:42.488 "reset": true, 00:12:42.488 "nvme_admin": false, 00:12:42.488 "nvme_io": false, 00:12:42.488 "nvme_io_md": false, 00:12:42.488 "write_zeroes": true, 00:12:42.488 "zcopy": true, 00:12:42.488 "get_zone_info": false, 00:12:42.488 "zone_management": false, 00:12:42.488 "zone_append": false, 00:12:42.488 "compare": false, 00:12:42.488 "compare_and_write": false, 00:12:42.488 "abort": true, 00:12:42.488 "seek_hole": false, 00:12:42.488 "seek_data": false, 00:12:42.488 "copy": true, 00:12:42.488 "nvme_iov_md": false 00:12:42.488 }, 00:12:42.488 "memory_domains": [ 00:12:42.488 { 00:12:42.488 "dma_device_id": "system", 00:12:42.488 "dma_device_type": 1 00:12:42.488 }, 00:12:42.488 { 00:12:42.488 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.488 "dma_device_type": 2 00:12:42.488 } 00:12:42.488 ], 00:12:42.488 "driver_specific": {} 00:12:42.488 } 00:12:42.488 ] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:42.488 10:22:55 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.488 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:12:42.489 10:22:55 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:42.748 Running I/O for 5 seconds... 00:12:43.685 10:22:56 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 3337524 00:12:43.685 10:22:56 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 3337524' 00:12:43.685 Process is existed as continue on error is set. Pid: 3337524 00:12:43.685 10:22:56 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:12:43.685 10:22:56 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.685 10:22:56 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:43.685 10:22:56 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.685 10:22:56 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:12:43.685 10:22:56 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.685 10:22:56 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:43.685 10:22:56 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.685 10:22:56 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:12:43.685 Timeout while waiting for response: 00:12:43.685 00:12:43.685 00:12:47.880 00:12:47.880 Latency(us) 00:12:47.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.880 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:47.880 EE_Dev_1 : 0.91 40635.38 158.73 5.52 0.00 390.49 119.60 642.25 00:12:47.880 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:47.880 Dev_2 : 5.00 88067.01 344.01 0.00 0.00 178.49 60.62 19188.94 00:12:47.880 =================================================================================================================== 00:12:47.880 Total : 128702.39 502.74 5.52 0.00 194.83 60.62 19188.94 00:12:48.817 10:23:01 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 3337524 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 3337524 ']' 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 3337524 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3337524 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3337524' 00:12:48.817 killing process with pid 3337524 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 3337524 00:12:48.817 Received shutdown signal, test time was about 5.000000 seconds 00:12:48.817 00:12:48.817 Latency(us) 00:12:48.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.817 =================================================================================================================== 00:12:48.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 3337524 00:12:48.817 10:23:01 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=3338822 00:12:48.817 10:23:01 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 3338822' 00:12:48.817 Process error testing pid: 3338822 00:12:48.817 10:23:01 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:12:48.817 10:23:01 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 3338822 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 3338822 ']' 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.817 10:23:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:48.817 [2024-07-26 10:23:01.684390] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
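The run below (pid 3338822) repeats the Dev_1/EE_Dev_1/Dev_2 setup from the first error run, but this time bdevperf is started without the -f continue-on-error flag, so the perform_tests call is expected to fail once the injected errors hit. A minimal sketch of that RPC sequence, assuming a bdevperf instance already listening on the default /var/tmp/spdk.sock (names, sizes and flags are copied from the trace; this is an illustration of the flow, not the autotest helper functions themselves):

RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
# 128 MB malloc bdev with 512-byte blocks (262144 blocks in the dumps above)
$RPC bdev_malloc_create -b Dev_1 128 512
# Wrap Dev_1 in an error-injection bdev; the resulting bdev is EE_Dev_1 in this run
$RPC bdev_error_create Dev_1
# Second, plain target used by the randread workload
$RPC bdev_malloc_create -b Dev_2 128 512
# Fail the next 5 I/Os of any type submitted to EE_Dev_1
$RPC bdev_error_inject_error EE_Dev_1 all failure -n 5
# Kick off the registered bdevperf job set
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests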
00:12:48.817 [2024-07-26 10:23:01.684454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338822 ] 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:49.077 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:49.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:49.077 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:49.077 [2024-07-26 10:23:01.806952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.077 [2024-07-26 10:23:01.851275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:12:50.014 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.014 Dev_1 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.014 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.014 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 [ 00:12:50.015 { 00:12:50.015 "name": "Dev_1", 00:12:50.015 "aliases": [ 
00:12:50.015 "c6a35b25-813f-4887-86b3-d2019b156fe2" 00:12:50.015 ], 00:12:50.015 "product_name": "Malloc disk", 00:12:50.015 "block_size": 512, 00:12:50.015 "num_blocks": 262144, 00:12:50.015 "uuid": "c6a35b25-813f-4887-86b3-d2019b156fe2", 00:12:50.015 "assigned_rate_limits": { 00:12:50.015 "rw_ios_per_sec": 0, 00:12:50.015 "rw_mbytes_per_sec": 0, 00:12:50.015 "r_mbytes_per_sec": 0, 00:12:50.015 "w_mbytes_per_sec": 0 00:12:50.015 }, 00:12:50.015 "claimed": false, 00:12:50.015 "zoned": false, 00:12:50.015 "supported_io_types": { 00:12:50.015 "read": true, 00:12:50.015 "write": true, 00:12:50.015 "unmap": true, 00:12:50.015 "flush": true, 00:12:50.015 "reset": true, 00:12:50.015 "nvme_admin": false, 00:12:50.015 "nvme_io": false, 00:12:50.015 "nvme_io_md": false, 00:12:50.015 "write_zeroes": true, 00:12:50.015 "zcopy": true, 00:12:50.015 "get_zone_info": false, 00:12:50.015 "zone_management": false, 00:12:50.015 "zone_append": false, 00:12:50.015 "compare": false, 00:12:50.015 "compare_and_write": false, 00:12:50.015 "abort": true, 00:12:50.015 "seek_hole": false, 00:12:50.015 "seek_data": false, 00:12:50.015 "copy": true, 00:12:50.015 "nvme_iov_md": false 00:12:50.015 }, 00:12:50.015 "memory_domains": [ 00:12:50.015 { 00:12:50.015 "dma_device_id": "system", 00:12:50.015 "dma_device_type": 1 00:12:50.015 }, 00:12:50.015 { 00:12:50.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.015 "dma_device_type": 2 00:12:50.015 } 00:12:50.015 ], 00:12:50.015 "driver_specific": {} 00:12:50.015 } 00:12:50.015 ] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:12:50.015 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 true 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 Dev_2 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 10:23:02 
blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 [ 00:12:50.015 { 00:12:50.015 "name": "Dev_2", 00:12:50.015 "aliases": [ 00:12:50.015 "46a7129b-af7d-40cb-9d99-b2dd36de6fd9" 00:12:50.015 ], 00:12:50.015 "product_name": "Malloc disk", 00:12:50.015 "block_size": 512, 00:12:50.015 "num_blocks": 262144, 00:12:50.015 "uuid": "46a7129b-af7d-40cb-9d99-b2dd36de6fd9", 00:12:50.015 "assigned_rate_limits": { 00:12:50.015 "rw_ios_per_sec": 0, 00:12:50.015 "rw_mbytes_per_sec": 0, 00:12:50.015 "r_mbytes_per_sec": 0, 00:12:50.015 "w_mbytes_per_sec": 0 00:12:50.015 }, 00:12:50.015 "claimed": false, 00:12:50.015 "zoned": false, 00:12:50.015 "supported_io_types": { 00:12:50.015 "read": true, 00:12:50.015 "write": true, 00:12:50.015 "unmap": true, 00:12:50.015 "flush": true, 00:12:50.015 "reset": true, 00:12:50.015 "nvme_admin": false, 00:12:50.015 "nvme_io": false, 00:12:50.015 "nvme_io_md": false, 00:12:50.015 "write_zeroes": true, 00:12:50.015 "zcopy": true, 00:12:50.015 "get_zone_info": false, 00:12:50.015 "zone_management": false, 00:12:50.015 "zone_append": false, 00:12:50.015 "compare": false, 00:12:50.015 "compare_and_write": false, 00:12:50.015 "abort": true, 00:12:50.015 "seek_hole": false, 00:12:50.015 "seek_data": false, 00:12:50.015 "copy": true, 00:12:50.015 "nvme_iov_md": false 00:12:50.015 }, 00:12:50.015 "memory_domains": [ 00:12:50.015 { 00:12:50.015 "dma_device_id": "system", 00:12:50.015 "dma_device_type": 1 00:12:50.015 }, 00:12:50.015 { 00:12:50.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.015 "dma_device_type": 2 00:12:50.015 } 00:12:50.015 ], 00:12:50.015 "driver_specific": {} 00:12:50.015 } 00:12:50.015 ] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:12:50.015 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.015 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 3338822 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3338822 00:12:50.015 10:23:02 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:12:50.015 10:23:02 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 3338822 00:12:50.015 Running I/O for 5 seconds... 00:12:50.015 task offset: 153896 on job bdev=EE_Dev_1 fails 00:12:50.015 00:12:50.015 Latency(us) 00:12:50.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.015 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:50.015 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:12:50.015 EE_Dev_1 : 0.00 31837.92 124.37 7235.89 0.00 343.87 120.42 609.48 00:12:50.015 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:50.015 Dev_2 : 0.00 19740.90 77.11 0.00 0.00 608.31 115.51 1127.22 00:12:50.015 =================================================================================================================== 00:12:50.015 Total : 51578.82 201.48 7235.89 0.00 487.29 115.51 1127.22 00:12:50.015 [2024-07-26 10:23:02.848756] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:50.015 request: 00:12:50.015 { 00:12:50.015 "method": "perform_tests", 00:12:50.015 "req_id": 1 00:12:50.015 } 00:12:50.015 Got JSON-RPC error response 00:12:50.015 response: 00:12:50.015 { 00:12:50.015 "code": -32603, 00:12:50.015 "message": "bdevperf failed with error Operation not permitted" 00:12:50.015 } 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.275 00:12:50.275 real 0m8.894s 00:12:50.275 user 0m9.298s 00:12:50.275 sys 0m0.815s 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.275 10:23:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:12:50.275 ************************************ 00:12:50.275 END TEST bdev_error 00:12:50.275 ************************************ 00:12:50.275 10:23:03 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:12:50.275 10:23:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.275 10:23:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.275 10:23:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:50.275 ************************************ 00:12:50.275 START TEST bdev_stat 00:12:50.275 ************************************ 00:12:50.275 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:12:50.275 10:23:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=3339108 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 3339108' 00:12:50.535 Process Bdev IO statistics testing pid: 3339108 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 
-o 4096 -w randread -t 10 -C '' 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 3339108 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 3339108 ']' 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.535 10:23:03 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:50.535 [2024-07-26 10:23:03.235185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:12:50.535 [2024-07-26 10:23:03.235248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339108 ] 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:12:50.535 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:50.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:50.535 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:50.535 [2024-07-26 10:23:03.371341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.535 [2024-07-26 10:23:03.415866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.535 [2024-07-26 10:23:03.415872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:51.471 Malloc_STAT 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:12:51.471 10:23:04 
blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:51.471 [ 00:12:51.471 { 00:12:51.471 "name": "Malloc_STAT", 00:12:51.471 "aliases": [ 00:12:51.471 "ad8f055d-c34f-458a-a84c-83d0a9cb96a2" 00:12:51.471 ], 00:12:51.471 "product_name": "Malloc disk", 00:12:51.471 "block_size": 512, 00:12:51.471 "num_blocks": 262144, 00:12:51.471 "uuid": "ad8f055d-c34f-458a-a84c-83d0a9cb96a2", 00:12:51.471 "assigned_rate_limits": { 00:12:51.471 "rw_ios_per_sec": 0, 00:12:51.471 "rw_mbytes_per_sec": 0, 00:12:51.471 "r_mbytes_per_sec": 0, 00:12:51.471 "w_mbytes_per_sec": 0 00:12:51.471 }, 00:12:51.471 "claimed": false, 00:12:51.471 "zoned": false, 00:12:51.471 "supported_io_types": { 00:12:51.471 "read": true, 00:12:51.471 "write": true, 00:12:51.471 "unmap": true, 00:12:51.471 "flush": true, 00:12:51.471 "reset": true, 00:12:51.471 "nvme_admin": false, 00:12:51.471 "nvme_io": false, 00:12:51.471 "nvme_io_md": false, 00:12:51.471 "write_zeroes": true, 00:12:51.471 "zcopy": true, 00:12:51.471 "get_zone_info": false, 00:12:51.471 "zone_management": false, 00:12:51.471 "zone_append": false, 00:12:51.471 "compare": false, 00:12:51.471 "compare_and_write": false, 00:12:51.471 "abort": true, 00:12:51.471 "seek_hole": false, 00:12:51.471 "seek_data": false, 00:12:51.471 "copy": true, 00:12:51.471 "nvme_iov_md": false 00:12:51.471 }, 00:12:51.471 "memory_domains": [ 00:12:51.471 { 00:12:51.471 "dma_device_id": "system", 00:12:51.471 "dma_device_type": 1 00:12:51.471 }, 00:12:51.471 { 00:12:51.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.471 "dma_device_type": 2 00:12:51.471 } 00:12:51.471 ], 00:12:51.471 "driver_specific": {} 00:12:51.471 } 00:12:51.471 ] 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:12:51.471 10:23:04 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:51.471 Running I/O for 10 seconds... 
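While the 10-second randread run is in flight, the stat test below samples Malloc_STAT through bdev_get_iostat and checks that the summed per-channel read counts fall between two aggregate samples. A minimal sketch of those queries, assuming the same bdevperf instance on the default RPC socket (the jq paths are the ones used in the trace):

RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
# First aggregate sample for the bdev
io_count1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# Per-channel counters: one entry per reactor thread driving I/O (bdevperf was started with -C)
per_ch=$($RPC bdev_get_iostat -b Malloc_STAT -c)
ch1=$(echo "$per_ch" | jq -r '.channels[0].num_read_ops')
ch2=$(echo "$per_ch" | jq -r '.channels[1].num_read_ops')
# Second aggregate sample
io_count2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# The per-channel sum must bracket between the two aggregate samples
sum=$((ch1 + ch2))
[ "$sum" -ge "$io_count1" ] && [ "$sum" -le "$io_count2" ] || echo "per-channel sum out of range"

In the trace below the two channel counts, 99584 and 100352, sum to 199936, which lies between the first aggregate sample (193284) and the second (212228), so both comparisons pass.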
00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:12:53.375 "tick_rate": 2500000000, 00:12:53.375 "ticks": 14454331199400040, 00:12:53.375 "bdevs": [ 00:12:53.375 { 00:12:53.375 "name": "Malloc_STAT", 00:12:53.375 "bytes_read": 791720448, 00:12:53.375 "num_read_ops": 193284, 00:12:53.375 "bytes_written": 0, 00:12:53.375 "num_write_ops": 0, 00:12:53.375 "bytes_unmapped": 0, 00:12:53.375 "num_unmap_ops": 0, 00:12:53.375 "bytes_copied": 0, 00:12:53.375 "num_copy_ops": 0, 00:12:53.375 "read_latency_ticks": 2432614662606, 00:12:53.375 "max_read_latency_ticks": 15421698, 00:12:53.375 "min_read_latency_ticks": 272704, 00:12:53.375 "write_latency_ticks": 0, 00:12:53.375 "max_write_latency_ticks": 0, 00:12:53.375 "min_write_latency_ticks": 0, 00:12:53.375 "unmap_latency_ticks": 0, 00:12:53.375 "max_unmap_latency_ticks": 0, 00:12:53.375 "min_unmap_latency_ticks": 0, 00:12:53.375 "copy_latency_ticks": 0, 00:12:53.375 "max_copy_latency_ticks": 0, 00:12:53.375 "min_copy_latency_ticks": 0, 00:12:53.375 "io_error": {} 00:12:53.375 } 00:12:53.375 ] 00:12:53.375 }' 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:12:53.375 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=193284 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:12:53.660 "tick_rate": 2500000000, 00:12:53.660 "ticks": 14454331364602732, 00:12:53.660 "name": "Malloc_STAT", 00:12:53.660 "channels": [ 00:12:53.660 { 00:12:53.660 "thread_id": 2, 00:12:53.660 "bytes_read": 407896064, 00:12:53.660 "num_read_ops": 99584, 00:12:53.660 "bytes_written": 0, 00:12:53.660 "num_write_ops": 0, 00:12:53.660 "bytes_unmapped": 0, 00:12:53.660 "num_unmap_ops": 0, 
00:12:53.660 "bytes_copied": 0, 00:12:53.660 "num_copy_ops": 0, 00:12:53.660 "read_latency_ticks": 1257474276820, 00:12:53.660 "max_read_latency_ticks": 13355592, 00:12:53.660 "min_read_latency_ticks": 8083376, 00:12:53.660 "write_latency_ticks": 0, 00:12:53.660 "max_write_latency_ticks": 0, 00:12:53.660 "min_write_latency_ticks": 0, 00:12:53.660 "unmap_latency_ticks": 0, 00:12:53.660 "max_unmap_latency_ticks": 0, 00:12:53.660 "min_unmap_latency_ticks": 0, 00:12:53.660 "copy_latency_ticks": 0, 00:12:53.660 "max_copy_latency_ticks": 0, 00:12:53.660 "min_copy_latency_ticks": 0 00:12:53.660 }, 00:12:53.660 { 00:12:53.660 "thread_id": 3, 00:12:53.660 "bytes_read": 411041792, 00:12:53.660 "num_read_ops": 100352, 00:12:53.660 "bytes_written": 0, 00:12:53.660 "num_write_ops": 0, 00:12:53.660 "bytes_unmapped": 0, 00:12:53.660 "num_unmap_ops": 0, 00:12:53.660 "bytes_copied": 0, 00:12:53.660 "num_copy_ops": 0, 00:12:53.660 "read_latency_ticks": 1258969270824, 00:12:53.660 "max_read_latency_ticks": 15421698, 00:12:53.660 "min_read_latency_ticks": 8047698, 00:12:53.660 "write_latency_ticks": 0, 00:12:53.660 "max_write_latency_ticks": 0, 00:12:53.660 "min_write_latency_ticks": 0, 00:12:53.660 "unmap_latency_ticks": 0, 00:12:53.660 "max_unmap_latency_ticks": 0, 00:12:53.660 "min_unmap_latency_ticks": 0, 00:12:53.660 "copy_latency_ticks": 0, 00:12:53.660 "max_copy_latency_ticks": 0, 00:12:53.660 "min_copy_latency_ticks": 0 00:12:53.660 } 00:12:53.660 ] 00:12:53.660 }' 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=99584 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=99584 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=100352 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=199936 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.660 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:12:53.660 "tick_rate": 2500000000, 00:12:53.660 "ticks": 14454331668159376, 00:12:53.660 "bdevs": [ 00:12:53.660 { 00:12:53.660 "name": "Malloc_STAT", 00:12:53.660 "bytes_read": 869315072, 00:12:53.660 "num_read_ops": 212228, 00:12:53.660 "bytes_written": 0, 00:12:53.660 "num_write_ops": 0, 00:12:53.660 "bytes_unmapped": 0, 00:12:53.660 "num_unmap_ops": 0, 00:12:53.660 "bytes_copied": 0, 00:12:53.660 "num_copy_ops": 0, 00:12:53.660 "read_latency_ticks": 2671103093628, 00:12:53.660 "max_read_latency_ticks": 15421698, 00:12:53.660 "min_read_latency_ticks": 272704, 00:12:53.660 "write_latency_ticks": 0, 00:12:53.661 "max_write_latency_ticks": 0, 00:12:53.661 "min_write_latency_ticks": 0, 00:12:53.661 "unmap_latency_ticks": 0, 00:12:53.661 "max_unmap_latency_ticks": 0, 00:12:53.661 "min_unmap_latency_ticks": 0, 00:12:53.661 "copy_latency_ticks": 0, 00:12:53.661 "max_copy_latency_ticks": 0, 00:12:53.661 
"min_copy_latency_ticks": 0, 00:12:53.661 "io_error": {} 00:12:53.661 } 00:12:53.661 ] 00:12:53.661 }' 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=212228 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 199936 -lt 193284 ']' 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 199936 -gt 212228 ']' 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:53.661 00:12:53.661 Latency(us) 00:12:53.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.661 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:53.661 Malloc_STAT : 2.17 50592.18 197.63 0.00 0.00 5047.56 1579.42 5347.74 00:12:53.661 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:53.661 Malloc_STAT : 2.17 50937.72 198.98 0.00 0.00 5014.08 1363.15 6186.60 00:12:53.661 =================================================================================================================== 00:12:53.661 Total : 101529.91 396.60 0.00 0.00 5030.76 1363.15 6186.60 00:12:53.661 0 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 3339108 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 3339108 ']' 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 3339108 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.661 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3339108 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3339108' 00:12:53.955 killing process with pid 3339108 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 3339108 00:12:53.955 Received shutdown signal, test time was about 2.253003 seconds 00:12:53.955 00:12:53.955 Latency(us) 00:12:53.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.955 =================================================================================================================== 00:12:53.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 3339108 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:12:53.955 00:12:53.955 real 0m3.567s 00:12:53.955 user 0m7.183s 00:12:53.955 sys 0m0.468s 00:12:53.955 10:23:06 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.955 10:23:06 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:12:53.955 ************************************ 00:12:53.955 END TEST bdev_stat 00:12:53.955 ************************************ 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:12:53.955 10:23:06 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:12:53.955 00:12:53.955 real 1m51.967s 00:12:53.955 user 7m20.931s 00:12:53.955 sys 0m21.289s 00:12:53.955 10:23:06 blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.955 10:23:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:53.955 ************************************ 00:12:53.955 END TEST blockdev_general 00:12:53.955 ************************************ 00:12:53.955 10:23:06 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:12:53.955 10:23:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:53.955 10:23:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.955 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 ************************************ 00:12:54.214 START TEST bdev_raid 00:12:54.214 ************************************ 00:12:54.214 10:23:06 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:12:54.214 * Looking for test storage... 
00:12:54.214 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:12:54.214 10:23:06 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:12:54.214 10:23:06 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:54.214 10:23:06 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:12:54.214 10:23:06 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:12:54.214 10:23:07 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:12:54.214 10:23:07 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:12:54.214 10:23:07 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:12:54.214 10:23:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.214 10:23:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.214 10:23:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 ************************************ 00:12:54.214 START TEST raid0_resize_superblock_test 00:12:54.214 ************************************ 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=3339925 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 3339925' 00:12:54.214 Process raid pid: 3339925 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 3339925 /var/tmp/spdk-raid.sock 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3339925 ']' 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:54.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.214 10:23:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:54.214 [2024-07-26 10:23:07.095622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
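(Sketch, not part of the captured output: raid_resize_superblock_test drives a bare bdev_svc app over a private RPC socket; a minimal stand-alone version of the launch-and-wait step traced above, run from the SPDK repo root, could look like the lines below, with the polling loop standing in for the harness's waitforlisten helper.)
# start the bdev service app on its own socket with raid debug logging, as above
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
svc_pid=$!
# wait until the socket answers RPCs; rpc_get_methods is used here only as a cheap liveness probe
until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done
# every subsequent test step is an RPC against that socket, e.g. the first malloc bdev:
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512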
00:12:54.214 [2024-07-26 10:23:07.095680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.473 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:54.473 [identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs for 0000:3d:01.1 through 0000:3f:01.6 omitted]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:54.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:54.474 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:54.474 [2024-07-26 10:23:07.231733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.474 [2024-07-26 10:23:07.276331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.474 [2024-07-26 10:23:07.341638] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.474 [2024-07-26 10:23:07.341674] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.040 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.040 10:23:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:55.041 10:23:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:12:55.609 malloc0 00:12:55.609 10:23:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:12:55.609 [2024-07-26 10:23:08.448737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:55.609 [2024-07-26 10:23:08.448783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.609 [2024-07-26 10:23:08.448805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e2790 00:12:55.609 [2024-07-26 10:23:08.448817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.609 [2024-07-26 10:23:08.450288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.609 [2024-07-26 10:23:08.450314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:55.609 pt0 00:12:55.609 10:23:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:12:55.867 abf0e047-59fe-4386-9b74-52b0c280ceb8 00:12:55.867 10:23:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:12:56.125 0e4b22f0-72e7-4418-a1fc-a20c2994d8f7 00:12:56.125 10:23:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:12:56.384 f18dcfea-5cfe-4ff1-8e34-5733d6c86af8 00:12:56.384 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:12:56.384 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:12:56.643 [2024-07-26 10:23:09.350049] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0e4b22f0-72e7-4418-a1fc-a20c2994d8f7 is claimed 00:12:56.643 [2024-07-26 10:23:09.350125] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f18dcfea-5cfe-4ff1-8e34-5733d6c86af8 is claimed 00:12:56.643 [2024-07-26 10:23:09.350240] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x209a550 00:12:56.643 [2024-07-26 10:23:09.350251] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:12:56.643 [2024-07-26 10:23:09.350428] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209aa30 00:12:56.643 [2024-07-26 10:23:09.350564] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x209a550 00:12:56.643 [2024-07-26 10:23:09.350573] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x209a550 00:12:56.643 [2024-07-26 10:23:09.350680] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.643 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:12:56.643 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:56.903 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:12:56.903 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:12:56.903 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:56.903 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:12:57.162 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:57.162 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:57.162 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:57.162 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:12:57.162 [2024-07-26 10:23:10.015965] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.162 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:57.162 10:23:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:57.162 10:23:10 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:12:57.162 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:12:57.421 [2024-07-26 10:23:10.184351] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:57.421 [2024-07-26 10:23:10.184383] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0e4b22f0-72e7-4418-a1fc-a20c2994d8f7' was resized: old size 131072, new size 204800 00:12:57.421 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:12:57.680 [2024-07-26 10:23:10.412900] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:57.680 [2024-07-26 10:23:10.412916] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f18dcfea-5cfe-4ff1-8e34-5733d6c86af8' was resized: old size 131072, new size 204800 00:12:57.680 [2024-07-26 10:23:10.412937] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:12:57.680 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:12:57.680 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:12:58.248 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:12:58.248 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:12:58.248 10:23:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:12:58.507 [2024-07-26 10:23:11.367525] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:12:58.507 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:12:58.508 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:12:58.508 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:12:58.767 [2024-07-26 10:23:11.595933] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:12:58.767 [2024-07-26 10:23:11.595984] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:12:58.767 [2024-07-26 10:23:11.595993] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.767 [2024-07-26 10:23:11.596004] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:58.767 [2024-07-26 10:23:11.596081] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.767 [2024-07-26 10:23:11.596110] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.767 [2024-07-26 10:23:11.596120] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209a550 name Raid, state offline 00:12:58.767 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:12:59.026 [2024-07-26 10:23:11.812478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:59.027 [2024-07-26 10:23:11.812518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.027 [2024-07-26 10:23:11.812536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e2ad0 00:12:59.027 [2024-07-26 10:23:11.812548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.027 [2024-07-26 10:23:11.814048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.027 [2024-07-26 10:23:11.814075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:59.027 [2024-07-26 10:23:11.815180] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0e4b22f0-72e7-4418-a1fc-a20c2994d8f7 00:12:59.027 [2024-07-26 10:23:11.815220] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0e4b22f0-72e7-4418-a1fc-a20c2994d8f7 is claimed 00:12:59.027 [2024-07-26 10:23:11.815301] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f18dcfea-5cfe-4ff1-8e34-5733d6c86af8 00:12:59.027 [2024-07-26 10:23:11.815318] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f18dcfea-5cfe-4ff1-8e34-5733d6c86af8 is claimed 00:12:59.027 [2024-07-26 10:23:11.815417] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f18dcfea-5cfe-4ff1-8e34-5733d6c86af8 (2) smaller than existing raid bdev Raid (3) 00:12:59.027 [2024-07-26 10:23:11.815445] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fc1e70 00:12:59.027 [2024-07-26 10:23:11.815452] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:12:59.027 [2024-07-26 10:23:11.815599] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f31600 00:12:59.027 [2024-07-26 10:23:11.815726] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fc1e70 00:12:59.027 [2024-07-26 10:23:11.815735] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1fc1e70 00:12:59.027 [2024-07-26 10:23:11.815831] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.027 pt0 00:12:59.027 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:12:59.027 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b Raid 00:12:59.027 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:12:59.027 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:12:59.286 [2024-07-26 10:23:12.041310] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.287 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:12:59.287 10:23:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 3339925 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3339925 ']' 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3339925 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3339925 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3339925' 00:12:59.287 killing process with pid 3339925 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 3339925 00:12:59.287 [2024-07-26 10:23:12.112686] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.287 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 3339925 00:12:59.287 [2024-07-26 10:23:12.112735] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.287 [2024-07-26 10:23:12.112768] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.287 [2024-07-26 10:23:12.112778] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fc1e70 name Raid, state offline 00:12:59.546 [2024-07-26 10:23:12.191981] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.546 10:23:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:12:59.546 00:12:59.546 real 0m5.328s 00:12:59.546 user 0m8.652s 00:12:59.546 sys 0m1.143s 00:12:59.546 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.546 10:23:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.546 ************************************ 00:12:59.546 END TEST raid0_resize_superblock_test 00:12:59.546 ************************************ 00:12:59.546 10:23:12 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:12:59.546 10:23:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.546 10:23:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
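(Sketch, not part of the captured output: the raid0_resize_superblock_test steps traced above reduce to the following rpc.py calls against the same socket; names and sizes are taken from the trace, while the lvol UUIDs naturally differ per run.)
# 64 MiB lvols on a passthru-wrapped malloc lvstore, assembled into a raid0 with an on-disk superblock (-s)
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s
# growing the base lvols grows the striped array: 245760 blocks before, 393216 after both resizes
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'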
00:12:59.546 10:23:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.546 ************************************ 00:12:59.546 START TEST raid1_resize_superblock_test 00:12:59.546 ************************************ 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=3340831 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 3340831' 00:12:59.806 Process raid pid: 3340831 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 3340831 /var/tmp/spdk-raid.sock 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3340831 ']' 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:59.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.806 10:23:12 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.806 [2024-07-26 10:23:12.506964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:12:59.806 [2024-07-26 10:23:12.507022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.806 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:59.806 [identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs for 0000:3d:01.1 through 0000:3f:01.6 omitted]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:59.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.807 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:59.807 [2024-07-26 10:23:12.645185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.807 [2024-07-26 10:23:12.688721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.066 [2024-07-26 10:23:12.747789] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.066 [2024-07-26 10:23:12.747823] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.638 10:23:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.638 10:23:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:00.638 10:23:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:13:00.898 malloc0 00:13:00.898 10:23:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:01.156 [2024-07-26 10:23:13.946635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:01.156 [2024-07-26 10:23:13.946683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.156 [2024-07-26 10:23:13.946703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1efb790 00:13:01.156 [2024-07-26 10:23:13.946715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.156 [2024-07-26 10:23:13.948064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.156 [2024-07-26 10:23:13.948089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:01.156 pt0 00:13:01.156 10:23:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:01.415 6806cb91-4a11-445b-b2b7-30e72ee06af3 00:13:01.415 10:23:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:01.674 28ea50ed-e958-448c-9d4d-51c47737312c 00:13:01.674 10:23:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:01.932 f3f1e5e6-e3e7-47ae-93db-71612d591363 00:13:01.932 10:23:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:13:01.932 10:23:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:02.191 [2024-07-26 10:23:14.923751] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28ea50ed-e958-448c-9d4d-51c47737312c is claimed 00:13:02.191 [2024-07-26 10:23:14.923836] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f3f1e5e6-e3e7-47ae-93db-71612d591363 is claimed 00:13:02.191 [2024-07-26 10:23:14.923951] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eb3550 00:13:02.191 [2024-07-26 10:23:14.923961] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:02.191 [2024-07-26 10:23:14.924148] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1eb3a30 00:13:02.191 [2024-07-26 10:23:14.924293] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eb3550 00:13:02.191 [2024-07-26 10:23:14.924303] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1eb3550 00:13:02.191 [2024-07-26 10:23:14.924413] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.191 10:23:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:02.191 10:23:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:02.450 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:02.450 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:02.450 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:02.709 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:02.709 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:02.709 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:02.709 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:02.709 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:02.968 [2024-07-26 10:23:15.858412] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.227 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:03.227 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:03.227 10:23:15 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:13:03.227 10:23:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:03.227 [2024-07-26 10:23:16.094960] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:03.227 [2024-07-26 10:23:16.094984] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '28ea50ed-e958-448c-9d4d-51c47737312c' was resized: old size 131072, new size 204800 00:13:03.227 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:03.486 [2024-07-26 10:23:16.319519] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:03.486 [2024-07-26 10:23:16.319541] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f3f1e5e6-e3e7-47ae-93db-71612d591363' was resized: old size 131072, new size 204800 00:13:03.486 [2024-07-26 10:23:16.319564] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:13:03.486 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:03.486 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:03.745 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:03.745 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:03.745 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:04.004 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:04.004 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:04.004 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:04.004 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:04.004 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:13:04.263 [2024-07-26 10:23:17.005412] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.263 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:04.263 10:23:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:04.263 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:13:04.263 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:04.522 [2024-07-26 10:23:17.229806] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:04.522 [2024-07-26 10:23:17.229859] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:04.522 [2024-07-26 10:23:17.229882] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:04.522 [2024-07-26 10:23:17.229992] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.522 [2024-07-26 10:23:17.230123] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.522 [2024-07-26 10:23:17.230182] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.522 [2024-07-26 10:23:17.230194] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eb3550 name Raid, state offline 00:13:04.522 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:04.782 [2024-07-26 10:23:17.454361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:04.782 [2024-07-26 10:23:17.454397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.782 [2024-07-26 10:23:17.454414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1efd650 00:13:04.782 [2024-07-26 10:23:17.454425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.782 [2024-07-26 10:23:17.455893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.782 [2024-07-26 10:23:17.455920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:04.782 [2024-07-26 10:23:17.457098] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 28ea50ed-e958-448c-9d4d-51c47737312c 00:13:04.782 [2024-07-26 10:23:17.457133] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28ea50ed-e958-448c-9d4d-51c47737312c is claimed 00:13:04.782 [2024-07-26 10:23:17.457226] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f3f1e5e6-e3e7-47ae-93db-71612d591363 00:13:04.782 [2024-07-26 10:23:17.457244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f3f1e5e6-e3e7-47ae-93db-71612d591363 is claimed 00:13:04.782 [2024-07-26 10:23:17.457348] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f3f1e5e6-e3e7-47ae-93db-71612d591363 (2) smaller than existing raid bdev Raid (3) 00:13:04.782 [2024-07-26 10:23:17.457376] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eb54a0 00:13:04.782 [2024-07-26 10:23:17.457383] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:04.782 [2024-07-26 10:23:17.457540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d361d0 00:13:04.782 [2024-07-26 10:23:17.457675] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eb54a0 00:13:04.782 [2024-07-26 10:23:17.457684] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1eb54a0 00:13:04.782 [2024-07-26 10:23:17.457786] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.782 pt0 00:13:04.782 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:04.782 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b Raid 00:13:04.782 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:04.782 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:13:04.782 [2024-07-26 10:23:17.683204] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 3340831 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3340831 ']' 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3340831 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3340831 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3340831' 00:13:05.041 killing process with pid 3340831 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 3340831 00:13:05.041 [2024-07-26 10:23:17.759536] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.041 [2024-07-26 10:23:17.759585] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.041 [2024-07-26 10:23:17.759627] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.041 [2024-07-26 10:23:17.759637] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eb54a0 name Raid, state offline 00:13:05.041 10:23:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 3340831 00:13:05.041 [2024-07-26 10:23:17.839037] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.301 10:23:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:05.301 00:13:05.301 real 0m5.566s 00:13:05.301 user 0m9.095s 00:13:05.301 sys 0m1.191s 00:13:05.301 10:23:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.301 10:23:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 ************************************ 00:13:05.301 END TEST raid1_resize_superblock_test 00:13:05.301 ************************************ 00:13:05.301 10:23:18 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:13:05.301 10:23:18 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:13:05.301 10:23:18 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:13:05.301 10:23:18 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 
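(Sketch, not part of the captured output: the raid1 variant that just completed uses the same lvstore/lvol setup and differs only in the create call and the resulting geometry; with -r 1 the two 64 MiB lvols mirror each other, so the array reports 122880 usable blocks before the resizes and 196608 after, i.e. half of the 245760/393216 seen in the striped raid0 case.)
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'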
00:13:05.301 10:23:18 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:13:05.301 10:23:18 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:05.301 10:23:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:05.301 10:23:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.301 10:23:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 ************************************ 00:13:05.301 START TEST raid_function_test_raid0 00:13:05.301 ************************************ 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=3341946 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 3341946' 00:13:05.301 Process raid pid: 3341946 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 3341946 /var/tmp/spdk-raid.sock 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 3341946 ']' 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:05.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.301 10:23:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:05.301 [2024-07-26 10:23:18.171905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:13:05.301 [2024-07-26 10:23:18.171959] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.560 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:05.560 [identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs for 0000:3d:01.1 through 0000:3f:01.6 omitted]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:05.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:05.561 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:05.561 [2024-07-26 10:23:18.309128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.561 [2024-07-26 10:23:18.353442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.561 [2024-07-26 10:23:18.412954] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.561 [2024-07-26 10:23:18.412984] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:06.508 [2024-07-26 10:23:19.278992] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:06.508 [2024-07-26 10:23:19.280023] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:06.508 [2024-07-26 10:23:19.280072] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1939b60 00:13:06.508 [2024-07-26 10:23:19.280082] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:06.508 [2024-07-26 10:23:19.280318] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193bca0 00:13:06.508 [2024-07-26 10:23:19.280416] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1939b60 00:13:06.508 [2024-07-26 10:23:19.280425] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x1939b60 00:13:06.508 [2024-07-26 10:23:19.280517] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
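(Sketch, not part of the captured output: the configure_raid_bdev step above writes a short RPC batch to rpcs.txt and feeds the whole file to rpc.py in one invocation; the file's exact contents are not shown in this trace, so the malloc sizes and flag spelling below are illustrative assumptions chosen only to match the 131072-block, 512-byte-blocklen raid0 reported above.)
# assumed batch file piped into rpc.py on the raid socket
cat > /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt <<'EOF'
bdev_malloc_create 32 512 -b Base_1
bdev_malloc_create 32 512 -b Base_2
bdev_raid_create -z 64 -r raid0 -b "Base_1 Base_2" -n raid
EOF
./scripts/rpc.py -s /var/tmp/spdk-raid.sock < /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt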
00:13:06.508 Base_1 00:13:06.508 Base_2 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:06.508 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.807 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:07.071 [2024-07-26 10:23:19.752268] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1787480 00:13:07.071 /dev/nbd0 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.071 1+0 records in 00:13:07.071 1+0 records out 00:13:07.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298778 s, 13.7 MB/s 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:07.071 10:23:19 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:07.331 { 00:13:07.331 "nbd_device": "/dev/nbd0", 00:13:07.331 "bdev_name": "raid" 00:13:07.331 } 00:13:07.331 ]' 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:07.331 { 00:13:07.331 "nbd_device": "/dev/nbd0", 00:13:07.331 "bdev_name": "raid" 00:13:07.331 } 00:13:07.331 ]' 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:07.331 4096+0 records in 00:13:07.331 4096+0 records out 00:13:07.331 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0284855 s, 73.6 MB/s 00:13:07.331 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:07.899 4096+0 records in 00:13:07.899 4096+0 records out 00:13:07.899 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.459264 s, 4.6 MB/s 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:07.899 128+0 records in 00:13:07.899 128+0 records out 00:13:07.899 65536 bytes (66 kB, 64 KiB) copied, 0.000831574 s, 78.8 MB/s 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:07.899 2035+0 records in 00:13:07.899 2035+0 records out 00:13:07.899 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0117426 s, 88.7 MB/s 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 
/dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:07.899 456+0 records in 00:13:07.899 456+0 records out 00:13:07.899 233472 bytes (233 kB, 228 KiB) copied, 0.00110766 s, 211 MB/s 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.899 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:08.158 [2024-07-26 10:23:20.970224] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # 
nbd_get_count /var/tmp/spdk-raid.sock 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:08.158 10:23:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 3341946 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 3341946 ']' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 3341946 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.417 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3341946 00:13:08.676 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.676 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.676 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3341946' 00:13:08.676 killing process with pid 3341946 00:13:08.676 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 3341946 00:13:08.676 [2024-07-26 10:23:21.329106] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.676 [2024-07-26 10:23:21.329229] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.676 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 3341946 00:13:08.676 [2024-07-26 10:23:21.329311] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.676 [2024-07-26 10:23:21.329337] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1939b60 name raid, state offline 00:13:08.676 [2024-07-26 10:23:21.349272] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.936 10:23:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:13:08.936 00:13:08.936 real 0m3.483s 00:13:08.936 user 0m4.425s 
00:13:08.936 sys 0m1.258s 00:13:08.936 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.936 10:23:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:08.936 ************************************ 00:13:08.936 END TEST raid_function_test_raid0 00:13:08.936 ************************************ 00:13:08.936 10:23:21 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:13:08.936 10:23:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:08.936 10:23:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.936 10:23:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.936 ************************************ 00:13:08.936 START TEST raid_function_test_concat 00:13:08.936 ************************************ 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=3342562 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 3342562' 00:13:08.936 Process raid pid: 3342562 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 3342562 /var/tmp/spdk-raid.sock 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 3342562 ']' 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:08.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.936 10:23:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:08.936 [2024-07-26 10:23:21.726604] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
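Both function tests exercise the same discard-and-verify pattern that was just traced for raid0: seed a reference file with random data, mirror it onto the NBD device, punch identical holes in both, and byte-compare after every step. A condensed sketch; the offsets and counts (in 512-byte blocks) and the /raidtest scratch path are taken directly from the trace, while blkdiscard itself works in bytes.

  nbd=/dev/nbd0; ref=/raidtest/raidrandtest; blksize=512
  dd if=/dev/urandom of=$ref bs=$blksize count=4096               # reference data
  dd if=$ref of=$nbd bs=$blksize count=4096 oflag=direct          # copy onto the raid bdev
  blockdev --flushbufs $nbd && cmp -b -n 2097152 $ref $nbd        # device must match the file
  offs=(0 1028 321); nums=(128 2035 456)
  for i in 0 1 2; do
      off=$(( offs[i] * blksize )); len=$(( nums[i] * blksize ))
      dd if=/dev/zero of=$ref bs=$blksize seek=${offs[i]} count=${nums[i]} conv=notrunc
      blkdiscard -o $off -l $len $nbd                             # discard the same byte range on the device
      blockdev --flushbufs $nbd
      cmp -b -n 2097152 $ref $nbd                                 # discarded range must compare equal to the zeroed file
  done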
00:13:08.936 [2024-07-26 10:23:21.726657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:08.936 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:08.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:08.936 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:09.194 [2024-07-26 10:23:21.848931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.194 [2024-07-26 10:23:21.892849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.194 [2024-07-26 10:23:21.946354] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.194 [2024-07-26 10:23:21.946380] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:13:09.761 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:10.020 [2024-07-26 10:23:22.883831] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:10.020 [2024-07-26 10:23:22.884878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:10.020 [2024-07-26 10:23:22.884929] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x17a1b60 00:13:10.020 [2024-07-26 10:23:22.884943] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:10.020 [2024-07-26 10:23:22.885184] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17a3ca0 00:13:10.020 [2024-07-26 10:23:22.885282] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17a1b60 00:13:10.020 [2024-07-26 10:23:22.885291] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x17a1b60 00:13:10.020 [2024-07-26 10:23:22.885384] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:10.020 Base_1 00:13:10.020 Base_2 00:13:10.020 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:10.020 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:10.020 10:23:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.279 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:10.538 [2024-07-26 10:23:23.353068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ef480 00:13:10.538 /dev/nbd0 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.538 1+0 records in 00:13:10.538 1+0 records out 00:13:10.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238386 s, 17.2 MB/s 00:13:10.538 
10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:10.538 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:10.796 { 00:13:10.796 "nbd_device": "/dev/nbd0", 00:13:10.796 "bdev_name": "raid" 00:13:10.796 } 00:13:10.796 ]' 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:10.796 { 00:13:10.796 "nbd_device": "/dev/nbd0", 00:13:10.796 "bdev_name": "raid" 00:13:10.796 } 00:13:10.796 ]' 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:10.796 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:10.797 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:10.797 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:10.797 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:10.797 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:10.797 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:10.797 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # 
blksize=512 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:11.055 4096+0 records in 00:13:11.055 4096+0 records out 00:13:11.055 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0298823 s, 70.2 MB/s 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:11.055 4096+0 records in 00:13:11.055 4096+0 records out 00:13:11.055 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.188815 s, 11.1 MB/s 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:11.055 128+0 records in 00:13:11.055 128+0 records out 00:13:11.055 65536 bytes (66 kB, 64 KiB) copied, 0.000822891 s, 79.6 MB/s 00:13:11.055 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:11.313 2035+0 records in 00:13:11.313 2035+0 records out 00:13:11.313 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0108932 s, 95.6 MB/s 00:13:11.313 10:23:23 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:11.313 10:23:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:11.313 456+0 records in 00:13:11.313 456+0 records out 00:13:11.313 233472 bytes (233 kB, 228 KiB) copied, 0.00273807 s, 85.3 MB/s 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:11.313 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.314 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.572 [2024-07-26 10:23:24.273878] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:11.572 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 3342562 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 3342562 ']' 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 3342562 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3342562 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3342562' 00:13:11.831 killing process with pid 3342562 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 3342562 00:13:11.831 [2024-07-26 10:23:24.566946] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.831 [2024-07-26 10:23:24.567003] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.831 [2024-07-26 10:23:24.567039] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.831 [2024-07-26 10:23:24.567049] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17a1b60 name raid, state offline 00:13:11.831 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 3342562 00:13:11.831 [2024-07-26 10:23:24.582628] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.089 
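The NBD plumbing around that loop is the same in both function tests: the raid bdev is exported as /dev/nbd0 through the raid-socket RPC server, polled via /proc/partitions until it appears, and torn down again before the bdev_svc process is killed. The sketch below simplifies the bounded retry loops that waitfornbd/waitfornbd_exit actually use; paths are the ones from the trace.

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  nbdtest=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
  $rpc nbd_start_disk raid /dev/nbd0
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done      # waitfornbd, simplified
  dd if=/dev/nbd0 of=$nbdtest bs=4096 count=1 iflag=direct        # sanity read, as waitfornbd does
  rm -f $nbdtest
  # ... run the I/O verification against /dev/nbd0 ...
  $rpc nbd_stop_disk /dev/nbd0
  while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done      # waitfornbd_exit, simplified
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'                  # prints nothing once the disk is gone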
10:23:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:13:12.089 00:13:12.089 real 0m3.077s 00:13:12.089 user 0m4.048s 00:13:12.089 sys 0m1.187s 00:13:12.089 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.090 10:23:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:12.090 ************************************ 00:13:12.090 END TEST raid_function_test_concat 00:13:12.090 ************************************ 00:13:12.090 10:23:24 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:13:12.090 10:23:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:12.090 10:23:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.090 10:23:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.090 ************************************ 00:13:12.090 START TEST raid0_resize_test 00:13:12.090 ************************************ 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=3343175 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 3343175' 00:13:12.090 Process raid pid: 3343175 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 3343175 /var/tmp/spdk-raid.sock 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 3343175 ']' 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:12.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:12.090 10:23:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.090 [2024-07-26 10:23:24.884771] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
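Each of these tests starts its own bdev_svc application on the raid socket before issuing RPCs, exactly as the command line in the trace shows. One simple way to reproduce that start/wait/kill cycle is sketched below; polling rpc_get_methods is only a stand-in here, the real waitforlisten helper in autotest_common.sh is more elaborate.

  svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $svc -r $sock -i 0 -L bdev_raid &                 # same flags as in the trace
  raid_pid=$!
  # Wait until the app answers on its UNIX socket (simplified stand-in for waitforlisten):
  until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # ... test body ...
  kill $raid_pid; wait $raid_pid                    # killprocess / wait, as at the end of each test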
00:13:12.090 [2024-07-26 10:23:24.884822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:12.090 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:12.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.090 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:12.348 [2024-07-26 10:23:25.017751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.348 [2024-07-26 10:23:25.062219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.348 [2024-07-26 10:23:25.119987] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.348 [2024-07-26 10:23:25.120013] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.607 10:23:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.607 10:23:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:13:12.607 10:23:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:12.865 Base_1 00:13:12.865 10:23:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:13.123 Base_2 00:13:13.123 10:23:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:13:13.123 10:23:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:13.381 [2024-07-26 10:23:26.029106] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:13.381 [2024-07-26 10:23:26.030323] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:13.381 [2024-07-26 10:23:26.030370] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x26d9020 00:13:13.381 [2024-07-26 10:23:26.030379] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:13.381 [2024-07-26 10:23:26.030568] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x256d180 00:13:13.381 [2024-07-26 10:23:26.030645] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26d9020 00:13:13.381 [2024-07-26 10:23:26.030653] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x26d9020 00:13:13.381 
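The resize test that configured 'Raid' above then grows both null base bdevs and checks that the raid0 array grows with them; the expected size in MiB is just num_blocks * block_size / 1048576. Condensed from the RPC calls in this trace:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_null_create Base_1 32 512               # 32 MiB, 512-byte blocks
  $rpc bdev_null_create Base_2 32 512
  $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $rpc bdev_null_resize Base_1 64                   # grow each base to 64 MiB
  $rpc bdev_null_resize Base_2 64
  blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
  echo $(( blkcnt * 512 / 1048576 ))MiB             # 131072 blocks -> 64 MiB before, 262144 -> 128 MiB after both resizes

As the notices in the trace show, the array's block count only doubles once both base bdevs have been resized.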
[2024-07-26 10:23:26.030750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.381 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:13.381 [2024-07-26 10:23:26.253682] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:13.381 [2024-07-26 10:23:26.253698] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:13.381 true 00:13:13.381 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:13.381 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:13:13.640 [2024-07-26 10:23:26.482429] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.640 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:13:13.640 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:13:13.640 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:13:13.640 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:13:13.640 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:13:13.640 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:13.898 [2024-07-26 10:23:26.698838] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:13.898 [2024-07-26 10:23:26.698853] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:13.898 [2024-07-26 10:23:26.698874] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:13.898 true 00:13:13.898 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:13.898 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:13:14.157 [2024-07-26 10:23:26.911555] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 3343175 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 3343175 ']' 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 3343175 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343175 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343175' 00:13:14.157 killing process with pid 3343175 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 3343175 00:13:14.157 [2024-07-26 10:23:26.990201] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.157 [2024-07-26 10:23:26.990248] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.157 [2024-07-26 10:23:26.990286] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.157 [2024-07-26 10:23:26.990296] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26d9020 name Raid, state offline 00:13:14.157 10:23:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 3343175 00:13:14.157 [2024-07-26 10:23:26.991474] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.416 10:23:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:13:14.416 00:13:14.416 real 0m2.315s 00:13:14.416 user 0m3.852s 00:13:14.416 sys 0m0.601s 00:13:14.416 10:23:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.416 10:23:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.416 ************************************ 00:13:14.416 END TEST raid0_resize_test 00:13:14.416 ************************************ 00:13:14.416 10:23:27 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:13:14.416 10:23:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.416 10:23:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.416 10:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.416 ************************************ 00:13:14.416 START TEST raid1_resize_test 00:13:14.416 ************************************ 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=3343646 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 3343646' 00:13:14.416 Process raid pid: 3343646 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 3343646 /var/tmp/spdk-raid.sock 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 3343646 ']' 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:14.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.416 10:23:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.416 [2024-07-26 10:23:27.290129] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:13:14.416 [2024-07-26 10:23:27.290192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:14.675 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:14.675 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.675 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:14.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:14.676 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:14.676 [2024-07-26 10:23:27.424782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.676 [2024-07-26 10:23:27.469737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.676 [2024-07-26 10:23:27.530021] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.676 [2024-07-26 10:23:27.530056] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.607 10:23:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.607 10:23:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:13:15.607 10:23:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:15.607 Base_1 00:13:15.607 10:23:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_null_create Base_2 32 512 00:13:15.865 Base_2 00:13:15.865 10:23:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:13:15.865 10:23:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:13:16.123 [2024-07-26 10:23:28.836996] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:16.123 [2024-07-26 10:23:28.838184] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:16.123 [2024-07-26 10:23:28.838228] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eea020 00:13:16.123 [2024-07-26 10:23:28.838237] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:16.123 [2024-07-26 10:23:28.838417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d7e3f0 00:13:16.123 [2024-07-26 10:23:28.838499] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eea020 00:13:16.123 [2024-07-26 10:23:28.838508] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1eea020 00:13:16.123 [2024-07-26 10:23:28.838599] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.123 10:23:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:16.381 [2024-07-26 10:23:29.061565] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:16.381 [2024-07-26 10:23:29.061581] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:16.381 true 00:13:16.381 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:16.381 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:13:16.640 [2024-07-26 10:23:29.290314] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.640 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:13:16.640 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:13:16.640 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:13:16.640 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:13:16.640 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:13:16.640 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:16.640 [2024-07-26 10:23:29.518752] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:16.640 [2024-07-26 10:23:29.518766] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:16.640 [2024-07-26 10:23:29.518788] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:13:16.640 true 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:13:16.900 [2024-07-26 10:23:29.699368] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 3343646 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 3343646 ']' 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 3343646 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343646 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343646' 00:13:16.900 killing process with pid 3343646 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 3343646 00:13:16.900 [2024-07-26 10:23:29.780377] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.900 [2024-07-26 10:23:29.780428] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.900 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 3343646 00:13:16.900 [2024-07-26 10:23:29.780739] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.900 [2024-07-26 10:23:29.780752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eea020 name Raid, state offline 00:13:16.900 [2024-07-26 10:23:29.781642] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.159 10:23:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:13:17.159 00:13:17.159 real 0m2.707s 00:13:17.159 user 0m4.151s 00:13:17.159 sys 0m0.594s 00:13:17.159 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.159 10:23:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 ************************************ 00:13:17.159 END TEST raid1_resize_test 00:13:17.159 ************************************ 00:13:17.159 10:23:29 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:13:17.159 10:23:29 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:17.159 10:23:29 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:17.159 10:23:29 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:17.159 10:23:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.159 10:23:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 ************************************ 00:13:17.159 START TEST raid_state_function_test 00:13:17.160 ************************************ 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3344055 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3344055' 00:13:17.160 Process raid pid: 3344055 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 3344055 /var/tmp/spdk-raid.sock 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3344055 ']' 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:17.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.160 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.420 [2024-07-26 10:23:30.096741] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:13:17.420 [2024-07-26 10:23:30.096799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 
0000:3d:02.7 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:17.420 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:17.420 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:17.420 [2024-07-26 10:23:30.231089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.420 [2024-07-26 10:23:30.275461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.679 [2024-07-26 10:23:30.336860] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.680 [2024-07-26 10:23:30.336887] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.247 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.247 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:18.247 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:18.247 [2024-07-26 10:23:31.129978] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.247 [2024-07-26 10:23:31.130020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.247 [2024-07-26 10:23:31.130030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev2 00:13:18.247 [2024-07-26 10:23:31.130041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.506 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.507 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.507 "name": "Existed_Raid", 00:13:18.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.507 "strip_size_kb": 64, 00:13:18.507 "state": "configuring", 00:13:18.507 "raid_level": "raid0", 00:13:18.507 "superblock": false, 00:13:18.507 "num_base_bdevs": 2, 00:13:18.507 "num_base_bdevs_discovered": 0, 00:13:18.507 "num_base_bdevs_operational": 2, 00:13:18.507 "base_bdevs_list": [ 00:13:18.507 { 00:13:18.507 "name": "BaseBdev1", 00:13:18.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.507 "is_configured": false, 00:13:18.507 "data_offset": 0, 00:13:18.507 "data_size": 0 00:13:18.507 }, 00:13:18.507 { 00:13:18.507 "name": "BaseBdev2", 00:13:18.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.507 "is_configured": false, 00:13:18.507 "data_offset": 0, 00:13:18.507 "data_size": 0 00:13:18.507 } 00:13:18.507 ] 00:13:18.507 }' 00:13:18.507 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.507 10:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.074 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:19.366 [2024-07-26 10:23:32.152566] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.366 [2024-07-26 10:23:32.152601] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc8cd00 name Existed_Raid, state configuring 00:13:19.366 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n 
Existed_Raid 00:13:19.653 [2024-07-26 10:23:32.365133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.653 [2024-07-26 10:23:32.365169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.653 [2024-07-26 10:23:32.365179] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.653 [2024-07-26 10:23:32.365190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.653 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.911 [2024-07-26 10:23:32.599266] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.911 BaseBdev1 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.911 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.169 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:20.169 [ 00:13:20.169 { 00:13:20.169 "name": "BaseBdev1", 00:13:20.169 "aliases": [ 00:13:20.169 "57110223-743f-406b-a45c-7c0b379c63c4" 00:13:20.169 ], 00:13:20.169 "product_name": "Malloc disk", 00:13:20.169 "block_size": 512, 00:13:20.169 "num_blocks": 65536, 00:13:20.169 "uuid": "57110223-743f-406b-a45c-7c0b379c63c4", 00:13:20.169 "assigned_rate_limits": { 00:13:20.169 "rw_ios_per_sec": 0, 00:13:20.170 "rw_mbytes_per_sec": 0, 00:13:20.170 "r_mbytes_per_sec": 0, 00:13:20.170 "w_mbytes_per_sec": 0 00:13:20.170 }, 00:13:20.170 "claimed": true, 00:13:20.170 "claim_type": "exclusive_write", 00:13:20.170 "zoned": false, 00:13:20.170 "supported_io_types": { 00:13:20.170 "read": true, 00:13:20.170 "write": true, 00:13:20.170 "unmap": true, 00:13:20.170 "flush": true, 00:13:20.170 "reset": true, 00:13:20.170 "nvme_admin": false, 00:13:20.170 "nvme_io": false, 00:13:20.170 "nvme_io_md": false, 00:13:20.170 "write_zeroes": true, 00:13:20.170 "zcopy": true, 00:13:20.170 "get_zone_info": false, 00:13:20.170 "zone_management": false, 00:13:20.170 "zone_append": false, 00:13:20.170 "compare": false, 00:13:20.170 "compare_and_write": false, 00:13:20.170 "abort": true, 00:13:20.170 "seek_hole": false, 00:13:20.170 "seek_data": false, 00:13:20.170 "copy": true, 00:13:20.170 "nvme_iov_md": false 00:13:20.170 }, 00:13:20.170 "memory_domains": [ 00:13:20.170 { 00:13:20.170 "dma_device_id": "system", 00:13:20.170 "dma_device_type": 1 00:13:20.170 }, 00:13:20.170 { 00:13:20.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.170 
"dma_device_type": 2 00:13:20.170 } 00:13:20.170 ], 00:13:20.170 "driver_specific": {} 00:13:20.170 } 00:13:20.170 ] 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.170 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.427 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.427 "name": "Existed_Raid", 00:13:20.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.427 "strip_size_kb": 64, 00:13:20.427 "state": "configuring", 00:13:20.427 "raid_level": "raid0", 00:13:20.427 "superblock": false, 00:13:20.427 "num_base_bdevs": 2, 00:13:20.427 "num_base_bdevs_discovered": 1, 00:13:20.427 "num_base_bdevs_operational": 2, 00:13:20.427 "base_bdevs_list": [ 00:13:20.427 { 00:13:20.427 "name": "BaseBdev1", 00:13:20.427 "uuid": "57110223-743f-406b-a45c-7c0b379c63c4", 00:13:20.427 "is_configured": true, 00:13:20.427 "data_offset": 0, 00:13:20.427 "data_size": 65536 00:13:20.427 }, 00:13:20.427 { 00:13:20.427 "name": "BaseBdev2", 00:13:20.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.427 "is_configured": false, 00:13:20.427 "data_offset": 0, 00:13:20.427 "data_size": 0 00:13:20.427 } 00:13:20.427 ] 00:13:20.427 }' 00:13:20.427 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.427 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.997 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:21.254 [2024-07-26 10:23:34.047071] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.254 [2024-07-26 10:23:34.047112] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc8c630 name Existed_Raid, state configuring 00:13:21.254 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:21.512 [2024-07-26 10:23:34.219553] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.512 [2024-07-26 10:23:34.220878] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.512 [2024-07-26 10:23:34.220910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.512 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.770 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.770 "name": "Existed_Raid", 00:13:21.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.770 "strip_size_kb": 64, 00:13:21.770 "state": "configuring", 00:13:21.770 "raid_level": "raid0", 00:13:21.770 "superblock": false, 00:13:21.770 "num_base_bdevs": 2, 00:13:21.770 "num_base_bdevs_discovered": 1, 00:13:21.770 "num_base_bdevs_operational": 2, 00:13:21.770 "base_bdevs_list": [ 00:13:21.770 { 00:13:21.770 "name": "BaseBdev1", 00:13:21.770 "uuid": "57110223-743f-406b-a45c-7c0b379c63c4", 00:13:21.770 "is_configured": true, 00:13:21.770 "data_offset": 0, 00:13:21.770 "data_size": 65536 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "name": "BaseBdev2", 00:13:21.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.770 "is_configured": false, 00:13:21.770 "data_offset": 0, 00:13:21.770 "data_size": 0 00:13:21.770 } 00:13:21.771 ] 00:13:21.771 }' 00:13:21.771 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.771 10:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.336 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:13:22.595 [2024-07-26 10:23:35.293781] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.595 [2024-07-26 10:23:35.293815] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe3f190 00:13:22.595 [2024-07-26 10:23:35.293823] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:22.595 [2024-07-26 10:23:35.294047] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc8f790 00:13:22.595 [2024-07-26 10:23:35.294163] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe3f190 00:13:22.595 [2024-07-26 10:23:35.294174] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe3f190 00:13:22.595 [2024-07-26 10:23:35.294341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.595 BaseBdev2 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:22.595 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.854 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.854 [ 00:13:22.854 { 00:13:22.854 "name": "BaseBdev2", 00:13:22.854 "aliases": [ 00:13:22.854 "082cbf7e-9e67-45ec-b064-687931844f1b" 00:13:22.854 ], 00:13:22.854 "product_name": "Malloc disk", 00:13:22.854 "block_size": 512, 00:13:22.854 "num_blocks": 65536, 00:13:22.854 "uuid": "082cbf7e-9e67-45ec-b064-687931844f1b", 00:13:22.854 "assigned_rate_limits": { 00:13:22.854 "rw_ios_per_sec": 0, 00:13:22.854 "rw_mbytes_per_sec": 0, 00:13:22.854 "r_mbytes_per_sec": 0, 00:13:22.854 "w_mbytes_per_sec": 0 00:13:22.854 }, 00:13:22.854 "claimed": true, 00:13:22.854 "claim_type": "exclusive_write", 00:13:22.854 "zoned": false, 00:13:22.854 "supported_io_types": { 00:13:22.854 "read": true, 00:13:22.854 "write": true, 00:13:22.854 "unmap": true, 00:13:22.854 "flush": true, 00:13:22.854 "reset": true, 00:13:22.854 "nvme_admin": false, 00:13:22.854 "nvme_io": false, 00:13:22.854 "nvme_io_md": false, 00:13:22.854 "write_zeroes": true, 00:13:22.854 "zcopy": true, 00:13:22.854 "get_zone_info": false, 00:13:22.854 "zone_management": false, 00:13:22.854 "zone_append": false, 00:13:22.854 "compare": false, 00:13:22.854 "compare_and_write": false, 00:13:22.854 "abort": true, 00:13:22.854 "seek_hole": false, 00:13:22.854 "seek_data": false, 00:13:22.854 "copy": true, 00:13:22.854 "nvme_iov_md": false 00:13:22.854 }, 00:13:22.854 "memory_domains": [ 00:13:22.854 { 00:13:22.854 "dma_device_id": "system", 00:13:22.854 "dma_device_type": 1 00:13:22.854 }, 00:13:22.854 { 00:13:22.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.854 
"dma_device_type": 2 00:13:22.854 } 00:13:22.854 ], 00:13:22.854 "driver_specific": {} 00:13:22.854 } 00:13:22.854 ] 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.113 "name": "Existed_Raid", 00:13:23.113 "uuid": "37488fb5-e0c3-41d0-a103-7b16ae296bfa", 00:13:23.113 "strip_size_kb": 64, 00:13:23.113 "state": "online", 00:13:23.113 "raid_level": "raid0", 00:13:23.113 "superblock": false, 00:13:23.113 "num_base_bdevs": 2, 00:13:23.113 "num_base_bdevs_discovered": 2, 00:13:23.113 "num_base_bdevs_operational": 2, 00:13:23.113 "base_bdevs_list": [ 00:13:23.113 { 00:13:23.113 "name": "BaseBdev1", 00:13:23.113 "uuid": "57110223-743f-406b-a45c-7c0b379c63c4", 00:13:23.113 "is_configured": true, 00:13:23.113 "data_offset": 0, 00:13:23.113 "data_size": 65536 00:13:23.113 }, 00:13:23.113 { 00:13:23.113 "name": "BaseBdev2", 00:13:23.113 "uuid": "082cbf7e-9e67-45ec-b064-687931844f1b", 00:13:23.113 "is_configured": true, 00:13:23.113 "data_offset": 0, 00:13:23.113 "data_size": 65536 00:13:23.113 } 00:13:23.113 ] 00:13:23.113 }' 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.113 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.681 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:23.681 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:23.681 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:23.681 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:13:23.940 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:23.940 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:23.940 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:23.940 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:23.940 [2024-07-26 10:23:36.793976] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.940 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:23.940 "name": "Existed_Raid", 00:13:23.940 "aliases": [ 00:13:23.940 "37488fb5-e0c3-41d0-a103-7b16ae296bfa" 00:13:23.940 ], 00:13:23.940 "product_name": "Raid Volume", 00:13:23.940 "block_size": 512, 00:13:23.940 "num_blocks": 131072, 00:13:23.940 "uuid": "37488fb5-e0c3-41d0-a103-7b16ae296bfa", 00:13:23.940 "assigned_rate_limits": { 00:13:23.940 "rw_ios_per_sec": 0, 00:13:23.940 "rw_mbytes_per_sec": 0, 00:13:23.940 "r_mbytes_per_sec": 0, 00:13:23.940 "w_mbytes_per_sec": 0 00:13:23.941 }, 00:13:23.941 "claimed": false, 00:13:23.941 "zoned": false, 00:13:23.941 "supported_io_types": { 00:13:23.941 "read": true, 00:13:23.941 "write": true, 00:13:23.941 "unmap": true, 00:13:23.941 "flush": true, 00:13:23.941 "reset": true, 00:13:23.941 "nvme_admin": false, 00:13:23.941 "nvme_io": false, 00:13:23.941 "nvme_io_md": false, 00:13:23.941 "write_zeroes": true, 00:13:23.941 "zcopy": false, 00:13:23.941 "get_zone_info": false, 00:13:23.941 "zone_management": false, 00:13:23.941 "zone_append": false, 00:13:23.941 "compare": false, 00:13:23.941 "compare_and_write": false, 00:13:23.941 "abort": false, 00:13:23.941 "seek_hole": false, 00:13:23.941 "seek_data": false, 00:13:23.941 "copy": false, 00:13:23.941 "nvme_iov_md": false 00:13:23.941 }, 00:13:23.941 "memory_domains": [ 00:13:23.941 { 00:13:23.941 "dma_device_id": "system", 00:13:23.941 "dma_device_type": 1 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.941 "dma_device_type": 2 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "dma_device_id": "system", 00:13:23.941 "dma_device_type": 1 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.941 "dma_device_type": 2 00:13:23.941 } 00:13:23.941 ], 00:13:23.941 "driver_specific": { 00:13:23.941 "raid": { 00:13:23.941 "uuid": "37488fb5-e0c3-41d0-a103-7b16ae296bfa", 00:13:23.941 "strip_size_kb": 64, 00:13:23.941 "state": "online", 00:13:23.941 "raid_level": "raid0", 00:13:23.941 "superblock": false, 00:13:23.941 "num_base_bdevs": 2, 00:13:23.941 "num_base_bdevs_discovered": 2, 00:13:23.941 "num_base_bdevs_operational": 2, 00:13:23.941 "base_bdevs_list": [ 00:13:23.941 { 00:13:23.941 "name": "BaseBdev1", 00:13:23.941 "uuid": "57110223-743f-406b-a45c-7c0b379c63c4", 00:13:23.941 "is_configured": true, 00:13:23.941 "data_offset": 0, 00:13:23.941 "data_size": 65536 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "name": "BaseBdev2", 00:13:23.941 "uuid": "082cbf7e-9e67-45ec-b064-687931844f1b", 00:13:23.941 "is_configured": true, 00:13:23.941 "data_offset": 0, 00:13:23.941 "data_size": 65536 00:13:23.941 } 00:13:23.941 ] 00:13:23.941 } 00:13:23.941 } 00:13:23.941 }' 00:13:23.941 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.200 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:24.200 BaseBdev2' 00:13:24.200 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:24.200 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:24.200 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:24.200 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:24.200 "name": "BaseBdev1", 00:13:24.200 "aliases": [ 00:13:24.200 "57110223-743f-406b-a45c-7c0b379c63c4" 00:13:24.200 ], 00:13:24.200 "product_name": "Malloc disk", 00:13:24.200 "block_size": 512, 00:13:24.200 "num_blocks": 65536, 00:13:24.200 "uuid": "57110223-743f-406b-a45c-7c0b379c63c4", 00:13:24.200 "assigned_rate_limits": { 00:13:24.200 "rw_ios_per_sec": 0, 00:13:24.200 "rw_mbytes_per_sec": 0, 00:13:24.200 "r_mbytes_per_sec": 0, 00:13:24.200 "w_mbytes_per_sec": 0 00:13:24.200 }, 00:13:24.200 "claimed": true, 00:13:24.200 "claim_type": "exclusive_write", 00:13:24.200 "zoned": false, 00:13:24.200 "supported_io_types": { 00:13:24.200 "read": true, 00:13:24.200 "write": true, 00:13:24.200 "unmap": true, 00:13:24.200 "flush": true, 00:13:24.200 "reset": true, 00:13:24.200 "nvme_admin": false, 00:13:24.200 "nvme_io": false, 00:13:24.200 "nvme_io_md": false, 00:13:24.200 "write_zeroes": true, 00:13:24.200 "zcopy": true, 00:13:24.200 "get_zone_info": false, 00:13:24.200 "zone_management": false, 00:13:24.200 "zone_append": false, 00:13:24.200 "compare": false, 00:13:24.200 "compare_and_write": false, 00:13:24.200 "abort": true, 00:13:24.200 "seek_hole": false, 00:13:24.200 "seek_data": false, 00:13:24.200 "copy": true, 00:13:24.200 "nvme_iov_md": false 00:13:24.200 }, 00:13:24.200 "memory_domains": [ 00:13:24.200 { 00:13:24.200 "dma_device_id": "system", 00:13:24.200 "dma_device_type": 1 00:13:24.200 }, 00:13:24.200 { 00:13:24.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.200 "dma_device_type": 2 00:13:24.200 } 00:13:24.200 ], 00:13:24.200 "driver_specific": {} 00:13:24.200 }' 00:13:24.200 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.459 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.460 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:24.460 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:24.719 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:24.719 10:23:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:24.719 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:24.719 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:24.719 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:24.978 "name": "BaseBdev2", 00:13:24.978 "aliases": [ 00:13:24.978 "082cbf7e-9e67-45ec-b064-687931844f1b" 00:13:24.978 ], 00:13:24.978 "product_name": "Malloc disk", 00:13:24.978 "block_size": 512, 00:13:24.978 "num_blocks": 65536, 00:13:24.978 "uuid": "082cbf7e-9e67-45ec-b064-687931844f1b", 00:13:24.978 "assigned_rate_limits": { 00:13:24.978 "rw_ios_per_sec": 0, 00:13:24.978 "rw_mbytes_per_sec": 0, 00:13:24.978 "r_mbytes_per_sec": 0, 00:13:24.978 "w_mbytes_per_sec": 0 00:13:24.978 }, 00:13:24.978 "claimed": true, 00:13:24.978 "claim_type": "exclusive_write", 00:13:24.978 "zoned": false, 00:13:24.978 "supported_io_types": { 00:13:24.978 "read": true, 00:13:24.978 "write": true, 00:13:24.978 "unmap": true, 00:13:24.978 "flush": true, 00:13:24.978 "reset": true, 00:13:24.978 "nvme_admin": false, 00:13:24.978 "nvme_io": false, 00:13:24.978 "nvme_io_md": false, 00:13:24.978 "write_zeroes": true, 00:13:24.978 "zcopy": true, 00:13:24.978 "get_zone_info": false, 00:13:24.978 "zone_management": false, 00:13:24.978 "zone_append": false, 00:13:24.978 "compare": false, 00:13:24.978 "compare_and_write": false, 00:13:24.978 "abort": true, 00:13:24.978 "seek_hole": false, 00:13:24.978 "seek_data": false, 00:13:24.978 "copy": true, 00:13:24.978 "nvme_iov_md": false 00:13:24.978 }, 00:13:24.978 "memory_domains": [ 00:13:24.978 { 00:13:24.978 "dma_device_id": "system", 00:13:24.978 "dma_device_type": 1 00:13:24.978 }, 00:13:24.978 { 00:13:24.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.978 "dma_device_type": 2 00:13:24.978 } 00:13:24.978 ], 00:13:24.978 "driver_specific": {} 00:13:24.978 }' 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:24.978 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:25.238 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:25.238 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:25.238 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:25.498 [2024-07-26 10:23:38.149536] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.498 [2024-07-26 10:23:38.149562] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.498 [2024-07-26 10:23:38.149601] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.498 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.757 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.757 "name": "Existed_Raid", 00:13:25.757 "uuid": "37488fb5-e0c3-41d0-a103-7b16ae296bfa", 00:13:25.757 "strip_size_kb": 64, 00:13:25.757 "state": "offline", 00:13:25.757 "raid_level": "raid0", 00:13:25.757 "superblock": false, 00:13:25.757 "num_base_bdevs": 2, 00:13:25.757 "num_base_bdevs_discovered": 1, 00:13:25.757 "num_base_bdevs_operational": 1, 00:13:25.757 "base_bdevs_list": [ 00:13:25.757 { 00:13:25.757 "name": null, 00:13:25.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.757 "is_configured": false, 00:13:25.757 "data_offset": 0, 00:13:25.757 "data_size": 65536 00:13:25.757 }, 00:13:25.757 { 00:13:25.757 "name": "BaseBdev2", 00:13:25.757 "uuid": "082cbf7e-9e67-45ec-b064-687931844f1b", 00:13:25.757 "is_configured": true, 00:13:25.757 "data_offset": 0, 00:13:25.757 "data_size": 65536 00:13:25.757 } 00:13:25.757 ] 00:13:25.757 }' 00:13:25.757 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
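The trace above drives the raid0 volume offline by deleting BaseBdev1 and then reads the volume back with bdev_raid_get_bdevs to confirm the state change. A minimal standalone sketch of that check, assuming the same bdev_svc instance is still serving /var/tmp/spdk-raid.sock and that rpc.py is invoked from the SPDK repository root; the helper name check_raid_state is made up here, everything else mirrors the RPCs and jq filter in the log:

    check_raid_state() {
        # $1 = raid bdev name, $2 = expected state ("configuring", "online", "offline", ...)
        local state
        state=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r ".[] | select(.name == \"$1\") | .state")
        [[ $state == "$2" ]]
    }
    # raid0 has no redundancy, so losing one base bdev takes the volume offline
    check_raid_state Existed_Raid offline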
00:13:25.757 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.325 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:26.325 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:26.325 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.325 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:26.325 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:26.325 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.325 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:26.584 [2024-07-26 10:23:39.393769] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.585 [2024-07-26 10:23:39.393819] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe3f190 name Existed_Raid, state offline 00:13:26.585 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:26.585 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:26.585 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.585 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3344055 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3344055 ']' 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3344055 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3344055 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3344055' 00:13:26.844 killing process with pid 3344055 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3344055 00:13:26.844 [2024-07-26 10:23:39.694707] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.844 10:23:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@974 -- # wait 3344055 00:13:26.844 [2024-07-26 10:23:39.695565] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.104 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:27.104 00:13:27.104 real 0m9.840s 00:13:27.104 user 0m17.504s 00:13:27.104 sys 0m1.836s 00:13:27.104 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.104 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.104 ************************************ 00:13:27.104 END TEST raid_state_function_test 00:13:27.104 ************************************ 00:13:27.104 10:23:39 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:27.104 10:23:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:27.104 10:23:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.104 10:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.104 ************************************ 00:13:27.105 START TEST raid_state_function_test_sb 00:13:27.105 ************************************ 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3346081 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3346081' 00:13:27.105 Process raid pid: 3346081 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3346081 /var/tmp/spdk-raid.sock 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3346081 ']' 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:27.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.105 10:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.365 [2024-07-26 10:23:40.063850] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
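Before any RPCs run, the test starts a dedicated bdev_svc application as its RPC target and waits for the UNIX socket to come up; the xtrace above (bdev_raid.sh@243-246) records exactly that. A condensed sketch, assuming it runs from the SPDK repository root with the autotest common helpers sourced (common/autotest_common.sh in the log) so that waitforlisten is available:

    # Start the RPC target with raid debug logging on a private socket.
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the app is listening on the socket before issuing any RPCs.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock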
00:13:27.365 [2024-07-26 10:23:40.063984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:27.365 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:27.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:27.365 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:27.625 [2024-07-26 10:23:40.276100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.625 [2024-07-26 10:23:40.319396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.625 [2024-07-26 10:23:40.381219] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.625 [2024-07-26 10:23:40.381255] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.193 10:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.193 10:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:28.193 10:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:28.453 [2024-07-26 10:23:41.110663] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.453 [2024-07-26 10:23:41.110700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.453 [2024-07-26 10:23:41.110710] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.453 [2024-07-26 10:23:41.110721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.453 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.712 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.712 "name": "Existed_Raid", 00:13:28.712 "uuid": "4e731c38-6b10-487e-ada4-67eb0e204dd0", 00:13:28.712 "strip_size_kb": 64, 00:13:28.712 "state": "configuring", 00:13:28.712 "raid_level": "raid0", 00:13:28.712 "superblock": true, 00:13:28.712 "num_base_bdevs": 2, 00:13:28.712 "num_base_bdevs_discovered": 0, 00:13:28.712 "num_base_bdevs_operational": 2, 00:13:28.712 "base_bdevs_list": [ 00:13:28.712 { 00:13:28.712 "name": "BaseBdev1", 00:13:28.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.712 "is_configured": false, 00:13:28.712 "data_offset": 0, 00:13:28.712 "data_size": 0 00:13:28.712 }, 00:13:28.712 { 00:13:28.712 "name": "BaseBdev2", 00:13:28.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.712 "is_configured": false, 00:13:28.712 "data_offset": 0, 00:13:28.712 "data_size": 0 00:13:28.712 } 00:13:28.712 ] 00:13:28.712 }' 00:13:28.712 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.712 10:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.281 10:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:29.281 [2024-07-26 10:23:42.145264] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.281 [2024-07-26 10:23:42.145299] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a56d00 name Existed_Raid, state configuring 00:13:29.281 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:29.540 [2024-07-26 10:23:42.373878] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.540 [2024-07-26 10:23:42.373905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.540 [2024-07-26 10:23:42.373914] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.540 [2024-07-26 10:23:42.373925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.540 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:29.799 [2024-07-26 10:23:42.611916] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.799 BaseBdev1 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:29.799 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:30.058 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.318 [ 00:13:30.318 { 00:13:30.318 "name": "BaseBdev1", 00:13:30.318 "aliases": [ 00:13:30.318 "2aaa13a8-5762-46a8-b252-6a4be9ac71f6" 00:13:30.318 ], 00:13:30.318 "product_name": "Malloc disk", 00:13:30.318 "block_size": 512, 00:13:30.318 "num_blocks": 65536, 00:13:30.318 "uuid": "2aaa13a8-5762-46a8-b252-6a4be9ac71f6", 00:13:30.318 "assigned_rate_limits": { 00:13:30.318 "rw_ios_per_sec": 0, 00:13:30.318 "rw_mbytes_per_sec": 0, 00:13:30.318 "r_mbytes_per_sec": 0, 00:13:30.318 "w_mbytes_per_sec": 0 00:13:30.318 }, 00:13:30.318 "claimed": true, 00:13:30.318 "claim_type": "exclusive_write", 00:13:30.318 "zoned": false, 00:13:30.318 "supported_io_types": { 00:13:30.318 "read": true, 00:13:30.318 "write": true, 00:13:30.318 "unmap": true, 00:13:30.318 "flush": true, 00:13:30.318 "reset": true, 00:13:30.318 "nvme_admin": false, 00:13:30.318 "nvme_io": false, 00:13:30.318 "nvme_io_md": false, 00:13:30.318 "write_zeroes": true, 00:13:30.318 "zcopy": true, 00:13:30.318 "get_zone_info": false, 00:13:30.318 "zone_management": false, 00:13:30.318 "zone_append": false, 00:13:30.318 "compare": false, 00:13:30.318 "compare_and_write": false, 00:13:30.318 "abort": true, 00:13:30.318 "seek_hole": false, 00:13:30.318 "seek_data": false, 00:13:30.318 "copy": true, 00:13:30.318 "nvme_iov_md": false 00:13:30.318 }, 00:13:30.318 "memory_domains": [ 00:13:30.318 { 00:13:30.318 "dma_device_id": "system", 00:13:30.318 "dma_device_type": 1 00:13:30.318 }, 00:13:30.318 { 00:13:30.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.318 "dma_device_type": 2 00:13:30.318 } 00:13:30.318 ], 00:13:30.318 "driver_specific": {} 00:13:30.318 } 00:13:30.318 ] 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.318 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.578 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.578 "name": "Existed_Raid", 00:13:30.578 "uuid": "3bda2532-831f-4937-9c76-dfdbdf9ff752", 00:13:30.578 "strip_size_kb": 64, 00:13:30.578 "state": "configuring", 00:13:30.578 "raid_level": "raid0", 00:13:30.578 "superblock": true, 00:13:30.578 "num_base_bdevs": 2, 00:13:30.578 "num_base_bdevs_discovered": 1, 00:13:30.578 "num_base_bdevs_operational": 2, 00:13:30.578 "base_bdevs_list": [ 00:13:30.578 { 00:13:30.578 "name": "BaseBdev1", 00:13:30.578 "uuid": "2aaa13a8-5762-46a8-b252-6a4be9ac71f6", 00:13:30.578 "is_configured": true, 00:13:30.578 "data_offset": 2048, 00:13:30.578 "data_size": 63488 00:13:30.578 }, 00:13:30.578 { 00:13:30.578 "name": "BaseBdev2", 00:13:30.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.578 "is_configured": false, 00:13:30.578 "data_offset": 0, 00:13:30.578 "data_size": 0 00:13:30.578 } 00:13:30.578 ] 00:13:30.578 }' 00:13:30.578 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.578 10:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.145 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:31.404 [2024-07-26 10:23:44.075796] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.404 [2024-07-26 10:23:44.075827] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a56630 name Existed_Raid, state configuring 00:13:31.404 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:31.404 [2024-07-26 10:23:44.304432] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.404 [2024-07-26 10:23:44.305752] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.404 [2024-07-26 10:23:44.305781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.662 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:31.662 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:31.663 10:23:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.663 "name": "Existed_Raid", 00:13:31.663 "uuid": "d2ddce33-2ca2-4466-9ea9-025403a586bf", 00:13:31.663 "strip_size_kb": 64, 00:13:31.663 "state": "configuring", 00:13:31.663 "raid_level": "raid0", 00:13:31.663 "superblock": true, 00:13:31.663 "num_base_bdevs": 2, 00:13:31.663 "num_base_bdevs_discovered": 1, 00:13:31.663 "num_base_bdevs_operational": 2, 00:13:31.663 "base_bdevs_list": [ 00:13:31.663 { 00:13:31.663 "name": "BaseBdev1", 00:13:31.663 "uuid": "2aaa13a8-5762-46a8-b252-6a4be9ac71f6", 00:13:31.663 "is_configured": true, 00:13:31.663 "data_offset": 2048, 00:13:31.663 "data_size": 63488 00:13:31.663 }, 00:13:31.663 { 00:13:31.663 "name": "BaseBdev2", 00:13:31.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.663 "is_configured": false, 00:13:31.663 "data_offset": 0, 00:13:31.663 "data_size": 0 00:13:31.663 } 00:13:31.663 ] 00:13:31.663 }' 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.663 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.247 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:32.542 [2024-07-26 10:23:45.262079] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.542 [2024-07-26 10:23:45.262217] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c09190 00:13:32.542 [2024-07-26 10:23:45.262229] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:32.542 [2024-07-26 10:23:45.262385] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c0b190 00:13:32.542 [2024-07-26 10:23:45.262484] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c09190 00:13:32.542 [2024-07-26 10:23:45.262493] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c09190 
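At this point the superblock variant has walked the same create path as the plain run: the raid0 volume is created with -s while its base bdevs are still missing (state "configuring"), and it only moves to "online" once both 32 MiB malloc bdevs exist and are claimed, which is what the 0x1c09190 registration messages above record. The same flow, condensed to the three RPCs visible in the trace (socket path, sizes and names as in the log):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # -s asks for an on-disk superblock, -z 64 sets a 64 KiB strip size
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_malloc_create 32 512 -b BaseBdev1    # first base bdev is claimed; raid stays "configuring"
    $rpc bdev_malloc_create 32 512 -b BaseBdev2    # second base bdev arrives; raid configures and goes "online"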
00:13:32.542 [2024-07-26 10:23:45.262576] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.542 BaseBdev2 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:32.542 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:32.801 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:33.060 [ 00:13:33.060 { 00:13:33.060 "name": "BaseBdev2", 00:13:33.060 "aliases": [ 00:13:33.060 "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f" 00:13:33.060 ], 00:13:33.060 "product_name": "Malloc disk", 00:13:33.060 "block_size": 512, 00:13:33.060 "num_blocks": 65536, 00:13:33.060 "uuid": "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f", 00:13:33.060 "assigned_rate_limits": { 00:13:33.060 "rw_ios_per_sec": 0, 00:13:33.060 "rw_mbytes_per_sec": 0, 00:13:33.060 "r_mbytes_per_sec": 0, 00:13:33.060 "w_mbytes_per_sec": 0 00:13:33.060 }, 00:13:33.060 "claimed": true, 00:13:33.060 "claim_type": "exclusive_write", 00:13:33.060 "zoned": false, 00:13:33.060 "supported_io_types": { 00:13:33.060 "read": true, 00:13:33.060 "write": true, 00:13:33.060 "unmap": true, 00:13:33.060 "flush": true, 00:13:33.060 "reset": true, 00:13:33.060 "nvme_admin": false, 00:13:33.060 "nvme_io": false, 00:13:33.060 "nvme_io_md": false, 00:13:33.060 "write_zeroes": true, 00:13:33.060 "zcopy": true, 00:13:33.060 "get_zone_info": false, 00:13:33.060 "zone_management": false, 00:13:33.060 "zone_append": false, 00:13:33.060 "compare": false, 00:13:33.060 "compare_and_write": false, 00:13:33.060 "abort": true, 00:13:33.060 "seek_hole": false, 00:13:33.060 "seek_data": false, 00:13:33.060 "copy": true, 00:13:33.060 "nvme_iov_md": false 00:13:33.060 }, 00:13:33.060 "memory_domains": [ 00:13:33.060 { 00:13:33.060 "dma_device_id": "system", 00:13:33.060 "dma_device_type": 1 00:13:33.060 }, 00:13:33.060 { 00:13:33.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.060 "dma_device_type": 2 00:13:33.060 } 00:13:33.060 ], 00:13:33.060 "driver_specific": {} 00:13:33.060 } 00:13:33.060 ] 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.060 10:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.060 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.319 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.319 "name": "Existed_Raid", 00:13:33.319 "uuid": "d2ddce33-2ca2-4466-9ea9-025403a586bf", 00:13:33.319 "strip_size_kb": 64, 00:13:33.319 "state": "online", 00:13:33.319 "raid_level": "raid0", 00:13:33.319 "superblock": true, 00:13:33.319 "num_base_bdevs": 2, 00:13:33.319 "num_base_bdevs_discovered": 2, 00:13:33.319 "num_base_bdevs_operational": 2, 00:13:33.319 "base_bdevs_list": [ 00:13:33.319 { 00:13:33.319 "name": "BaseBdev1", 00:13:33.319 "uuid": "2aaa13a8-5762-46a8-b252-6a4be9ac71f6", 00:13:33.319 "is_configured": true, 00:13:33.319 "data_offset": 2048, 00:13:33.319 "data_size": 63488 00:13:33.319 }, 00:13:33.319 { 00:13:33.319 "name": "BaseBdev2", 00:13:33.319 "uuid": "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f", 00:13:33.319 "is_configured": true, 00:13:33.319 "data_offset": 2048, 00:13:33.319 "data_size": 63488 00:13:33.319 } 00:13:33.319 ] 00:13:33.319 }' 00:13:33.319 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.319 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:33.887 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:33.887 [2024-07-26 10:23:46.774306] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:13:34.146 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:34.146 "name": "Existed_Raid", 00:13:34.146 "aliases": [ 00:13:34.146 "d2ddce33-2ca2-4466-9ea9-025403a586bf" 00:13:34.146 ], 00:13:34.146 "product_name": "Raid Volume", 00:13:34.146 "block_size": 512, 00:13:34.146 "num_blocks": 126976, 00:13:34.146 "uuid": "d2ddce33-2ca2-4466-9ea9-025403a586bf", 00:13:34.146 "assigned_rate_limits": { 00:13:34.146 "rw_ios_per_sec": 0, 00:13:34.146 "rw_mbytes_per_sec": 0, 00:13:34.146 "r_mbytes_per_sec": 0, 00:13:34.146 "w_mbytes_per_sec": 0 00:13:34.146 }, 00:13:34.146 "claimed": false, 00:13:34.146 "zoned": false, 00:13:34.146 "supported_io_types": { 00:13:34.146 "read": true, 00:13:34.146 "write": true, 00:13:34.146 "unmap": true, 00:13:34.146 "flush": true, 00:13:34.146 "reset": true, 00:13:34.146 "nvme_admin": false, 00:13:34.146 "nvme_io": false, 00:13:34.146 "nvme_io_md": false, 00:13:34.146 "write_zeroes": true, 00:13:34.146 "zcopy": false, 00:13:34.146 "get_zone_info": false, 00:13:34.146 "zone_management": false, 00:13:34.146 "zone_append": false, 00:13:34.146 "compare": false, 00:13:34.146 "compare_and_write": false, 00:13:34.147 "abort": false, 00:13:34.147 "seek_hole": false, 00:13:34.147 "seek_data": false, 00:13:34.147 "copy": false, 00:13:34.147 "nvme_iov_md": false 00:13:34.147 }, 00:13:34.147 "memory_domains": [ 00:13:34.147 { 00:13:34.147 "dma_device_id": "system", 00:13:34.147 "dma_device_type": 1 00:13:34.147 }, 00:13:34.147 { 00:13:34.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.147 "dma_device_type": 2 00:13:34.147 }, 00:13:34.147 { 00:13:34.147 "dma_device_id": "system", 00:13:34.147 "dma_device_type": 1 00:13:34.147 }, 00:13:34.147 { 00:13:34.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.147 "dma_device_type": 2 00:13:34.147 } 00:13:34.147 ], 00:13:34.147 "driver_specific": { 00:13:34.147 "raid": { 00:13:34.147 "uuid": "d2ddce33-2ca2-4466-9ea9-025403a586bf", 00:13:34.147 "strip_size_kb": 64, 00:13:34.147 "state": "online", 00:13:34.147 "raid_level": "raid0", 00:13:34.147 "superblock": true, 00:13:34.147 "num_base_bdevs": 2, 00:13:34.147 "num_base_bdevs_discovered": 2, 00:13:34.147 "num_base_bdevs_operational": 2, 00:13:34.147 "base_bdevs_list": [ 00:13:34.147 { 00:13:34.147 "name": "BaseBdev1", 00:13:34.147 "uuid": "2aaa13a8-5762-46a8-b252-6a4be9ac71f6", 00:13:34.147 "is_configured": true, 00:13:34.147 "data_offset": 2048, 00:13:34.147 "data_size": 63488 00:13:34.147 }, 00:13:34.147 { 00:13:34.147 "name": "BaseBdev2", 00:13:34.147 "uuid": "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f", 00:13:34.147 "is_configured": true, 00:13:34.147 "data_offset": 2048, 00:13:34.147 "data_size": 63488 00:13:34.147 } 00:13:34.147 ] 00:13:34.147 } 00:13:34.147 } 00:13:34.147 }' 00:13:34.147 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:34.147 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:34.147 BaseBdev2' 00:13:34.147 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.147 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:34.147 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.406 10:23:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.406 "name": "BaseBdev1", 00:13:34.406 "aliases": [ 00:13:34.406 "2aaa13a8-5762-46a8-b252-6a4be9ac71f6" 00:13:34.406 ], 00:13:34.406 "product_name": "Malloc disk", 00:13:34.406 "block_size": 512, 00:13:34.406 "num_blocks": 65536, 00:13:34.406 "uuid": "2aaa13a8-5762-46a8-b252-6a4be9ac71f6", 00:13:34.406 "assigned_rate_limits": { 00:13:34.406 "rw_ios_per_sec": 0, 00:13:34.406 "rw_mbytes_per_sec": 0, 00:13:34.406 "r_mbytes_per_sec": 0, 00:13:34.406 "w_mbytes_per_sec": 0 00:13:34.406 }, 00:13:34.406 "claimed": true, 00:13:34.406 "claim_type": "exclusive_write", 00:13:34.406 "zoned": false, 00:13:34.406 "supported_io_types": { 00:13:34.406 "read": true, 00:13:34.406 "write": true, 00:13:34.406 "unmap": true, 00:13:34.406 "flush": true, 00:13:34.406 "reset": true, 00:13:34.406 "nvme_admin": false, 00:13:34.406 "nvme_io": false, 00:13:34.406 "nvme_io_md": false, 00:13:34.406 "write_zeroes": true, 00:13:34.406 "zcopy": true, 00:13:34.406 "get_zone_info": false, 00:13:34.406 "zone_management": false, 00:13:34.406 "zone_append": false, 00:13:34.406 "compare": false, 00:13:34.406 "compare_and_write": false, 00:13:34.406 "abort": true, 00:13:34.406 "seek_hole": false, 00:13:34.406 "seek_data": false, 00:13:34.406 "copy": true, 00:13:34.406 "nvme_iov_md": false 00:13:34.406 }, 00:13:34.406 "memory_domains": [ 00:13:34.406 { 00:13:34.406 "dma_device_id": "system", 00:13:34.406 "dma_device_type": 1 00:13:34.406 }, 00:13:34.406 { 00:13:34.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.406 "dma_device_type": 2 00:13:34.406 } 00:13:34.406 ], 00:13:34.406 "driver_specific": {} 00:13:34.406 }' 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.406 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:34.666 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.925 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.925 "name": "BaseBdev2", 00:13:34.925 
"aliases": [ 00:13:34.925 "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f" 00:13:34.925 ], 00:13:34.925 "product_name": "Malloc disk", 00:13:34.925 "block_size": 512, 00:13:34.925 "num_blocks": 65536, 00:13:34.925 "uuid": "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f", 00:13:34.925 "assigned_rate_limits": { 00:13:34.925 "rw_ios_per_sec": 0, 00:13:34.925 "rw_mbytes_per_sec": 0, 00:13:34.925 "r_mbytes_per_sec": 0, 00:13:34.925 "w_mbytes_per_sec": 0 00:13:34.925 }, 00:13:34.925 "claimed": true, 00:13:34.925 "claim_type": "exclusive_write", 00:13:34.925 "zoned": false, 00:13:34.925 "supported_io_types": { 00:13:34.925 "read": true, 00:13:34.925 "write": true, 00:13:34.925 "unmap": true, 00:13:34.925 "flush": true, 00:13:34.925 "reset": true, 00:13:34.925 "nvme_admin": false, 00:13:34.925 "nvme_io": false, 00:13:34.925 "nvme_io_md": false, 00:13:34.925 "write_zeroes": true, 00:13:34.925 "zcopy": true, 00:13:34.925 "get_zone_info": false, 00:13:34.925 "zone_management": false, 00:13:34.925 "zone_append": false, 00:13:34.925 "compare": false, 00:13:34.925 "compare_and_write": false, 00:13:34.925 "abort": true, 00:13:34.925 "seek_hole": false, 00:13:34.925 "seek_data": false, 00:13:34.925 "copy": true, 00:13:34.925 "nvme_iov_md": false 00:13:34.926 }, 00:13:34.926 "memory_domains": [ 00:13:34.926 { 00:13:34.926 "dma_device_id": "system", 00:13:34.926 "dma_device_type": 1 00:13:34.926 }, 00:13:34.926 { 00:13:34.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.926 "dma_device_type": 2 00:13:34.926 } 00:13:34.926 ], 00:13:34.926 "driver_specific": {} 00:13:34.926 }' 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.926 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.185 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:35.185 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:35.185 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.185 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:35.185 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:35.185 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:35.444 [2024-07-26 10:23:48.233944] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.444 [2024-07-26 10:23:48.233966] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.444 [2024-07-26 10:23:48.234003] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:35.444 10:23:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.444 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.703 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.703 "name": "Existed_Raid", 00:13:35.703 "uuid": "d2ddce33-2ca2-4466-9ea9-025403a586bf", 00:13:35.703 "strip_size_kb": 64, 00:13:35.703 "state": "offline", 00:13:35.703 "raid_level": "raid0", 00:13:35.703 "superblock": true, 00:13:35.703 "num_base_bdevs": 2, 00:13:35.703 "num_base_bdevs_discovered": 1, 00:13:35.703 "num_base_bdevs_operational": 1, 00:13:35.703 "base_bdevs_list": [ 00:13:35.703 { 00:13:35.703 "name": null, 00:13:35.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.703 "is_configured": false, 00:13:35.703 "data_offset": 2048, 00:13:35.703 "data_size": 63488 00:13:35.703 }, 00:13:35.703 { 00:13:35.703 "name": "BaseBdev2", 00:13:35.703 "uuid": "4cd7e3bb-7a03-4ed9-a8c1-2a002a47a51f", 00:13:35.703 "is_configured": true, 00:13:35.703 "data_offset": 2048, 00:13:35.703 "data_size": 63488 00:13:35.703 } 00:13:35.703 ] 00:13:35.703 }' 00:13:35.703 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.703 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.272 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:36.272 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:36.272 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.272 10:23:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:36.531 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:36.531 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:36.531 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:36.790 [2024-07-26 10:23:49.458218] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.790 [2024-07-26 10:23:49.458261] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c09190 name Existed_Raid, state offline 00:13:36.790 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:36.790 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:36.790 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.790 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3346081 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3346081 ']' 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3346081 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3346081 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3346081' 00:13:37.049 killing process with pid 3346081 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3346081 00:13:37.049 [2024-07-26 10:23:49.757605] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3346081 00:13:37.049 [2024-07-26 10:23:49.758443] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:37.049 00:13:37.049 real 0m9.980s 00:13:37.049 user 0m17.739s 00:13:37.049 sys 0m1.864s 00:13:37.049 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.049 10:23:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.049 ************************************ 00:13:37.049 END TEST raid_state_function_test_sb 00:13:37.049 ************************************ 00:13:37.309 10:23:49 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:37.309 10:23:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:37.309 10:23:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.309 10:23:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.309 ************************************ 00:13:37.309 START TEST raid_superblock_test 00:13:37.309 ************************************ 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3347944 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3347944 /var/tmp/spdk-raid.sock 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:37.309 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3347944 ']' 00:13:37.310 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:37.310 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.310 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:37.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:37.310 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.310 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.310 [2024-07-26 10:23:50.071100] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:13:37.310 [2024-07-26 10:23:50.071165] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347944 ] 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.3 cannot be used 
00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:37.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.310 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:37.310 [2024-07-26 10:23:50.197208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.570 [2024-07-26 10:23:50.244039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.570 [2024-07-26 10:23:50.301885] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.570 [2024-07-26 10:23:50.301924] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.136 10:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:38.393 malloc1 00:13:38.393 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.651 [2024-07-26 10:23:51.421550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.651 [2024-07-26 10:23:51.421594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.651 [2024-07-26 10:23:51.421614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2066270 00:13:38.651 [2024-07-26 10:23:51.421625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.651 [2024-07-26 10:23:51.423046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.651 [2024-07-26 10:23:51.423072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.651 pt1 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.651 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:38.909 malloc2 00:13:38.909 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.168 [2024-07-26 10:23:51.867163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.168 [2024-07-26 10:23:51.867206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.168 [2024-07-26 10:23:51.867222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20222f0 00:13:39.168 [2024-07-26 10:23:51.867234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.168 [2024-07-26 10:23:51.868724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.168 [2024-07-26 10:23:51.868750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.168 pt2 00:13:39.168 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:39.168 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:39.168 10:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:39.426 [2024-07-26 10:23:52.087769] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:13:39.426 [2024-07-26 10:23:52.089026] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.426 [2024-07-26 10:23:52.089136] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1febf20 00:13:39.426 [2024-07-26 10:23:52.089160] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:39.426 [2024-07-26 10:23:52.089345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ecb200 00:13:39.426 [2024-07-26 10:23:52.089465] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1febf20 00:13:39.426 [2024-07-26 10:23:52.089474] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1febf20 00:13:39.426 [2024-07-26 10:23:52.089580] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.426 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.685 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.685 "name": "raid_bdev1", 00:13:39.685 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:39.685 "strip_size_kb": 64, 00:13:39.685 "state": "online", 00:13:39.685 "raid_level": "raid0", 00:13:39.685 "superblock": true, 00:13:39.685 "num_base_bdevs": 2, 00:13:39.685 "num_base_bdevs_discovered": 2, 00:13:39.685 "num_base_bdevs_operational": 2, 00:13:39.685 "base_bdevs_list": [ 00:13:39.685 { 00:13:39.685 "name": "pt1", 00:13:39.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.685 "is_configured": true, 00:13:39.685 "data_offset": 2048, 00:13:39.685 "data_size": 63488 00:13:39.685 }, 00:13:39.685 { 00:13:39.685 "name": "pt2", 00:13:39.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.685 "is_configured": true, 00:13:39.685 "data_offset": 2048, 00:13:39.685 "data_size": 63488 00:13:39.685 } 00:13:39.685 ] 00:13:39.685 }' 00:13:39.685 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.685 10:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:40.251 10:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:40.251 [2024-07-26 10:23:53.106672] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.251 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:40.251 "name": "raid_bdev1", 00:13:40.251 "aliases": [ 00:13:40.251 "b8b2f7e5-1574-42ab-a3d7-3882d94138e3" 00:13:40.251 ], 00:13:40.251 "product_name": "Raid Volume", 00:13:40.251 "block_size": 512, 00:13:40.251 "num_blocks": 126976, 00:13:40.251 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:40.251 "assigned_rate_limits": { 00:13:40.251 "rw_ios_per_sec": 0, 00:13:40.251 "rw_mbytes_per_sec": 0, 00:13:40.251 "r_mbytes_per_sec": 0, 00:13:40.251 "w_mbytes_per_sec": 0 00:13:40.251 }, 00:13:40.251 "claimed": false, 00:13:40.251 "zoned": false, 00:13:40.251 "supported_io_types": { 00:13:40.251 "read": true, 00:13:40.251 "write": true, 00:13:40.251 "unmap": true, 00:13:40.251 "flush": true, 00:13:40.251 "reset": true, 00:13:40.251 "nvme_admin": false, 00:13:40.251 "nvme_io": false, 00:13:40.251 "nvme_io_md": false, 00:13:40.251 "write_zeroes": true, 00:13:40.251 "zcopy": false, 00:13:40.251 "get_zone_info": false, 00:13:40.251 "zone_management": false, 00:13:40.251 "zone_append": false, 00:13:40.251 "compare": false, 00:13:40.251 "compare_and_write": false, 00:13:40.251 "abort": false, 00:13:40.251 "seek_hole": false, 00:13:40.251 "seek_data": false, 00:13:40.251 "copy": false, 00:13:40.251 "nvme_iov_md": false 00:13:40.251 }, 00:13:40.251 "memory_domains": [ 00:13:40.251 { 00:13:40.251 "dma_device_id": "system", 00:13:40.251 "dma_device_type": 1 00:13:40.251 }, 00:13:40.251 { 00:13:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.251 "dma_device_type": 2 00:13:40.251 }, 00:13:40.251 { 00:13:40.251 "dma_device_id": "system", 00:13:40.251 "dma_device_type": 1 00:13:40.251 }, 00:13:40.251 { 00:13:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.251 "dma_device_type": 2 00:13:40.251 } 00:13:40.251 ], 00:13:40.251 "driver_specific": { 00:13:40.251 "raid": { 00:13:40.251 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:40.251 "strip_size_kb": 64, 00:13:40.251 "state": "online", 00:13:40.251 "raid_level": "raid0", 00:13:40.251 "superblock": true, 00:13:40.251 "num_base_bdevs": 2, 00:13:40.251 "num_base_bdevs_discovered": 2, 00:13:40.251 "num_base_bdevs_operational": 2, 00:13:40.251 "base_bdevs_list": [ 00:13:40.251 { 00:13:40.251 "name": "pt1", 00:13:40.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.251 "is_configured": true, 00:13:40.251 "data_offset": 2048, 00:13:40.251 "data_size": 63488 00:13:40.251 }, 00:13:40.251 { 00:13:40.251 "name": "pt2", 00:13:40.251 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:13:40.251 "is_configured": true, 00:13:40.251 "data_offset": 2048, 00:13:40.251 "data_size": 63488 00:13:40.251 } 00:13:40.251 ] 00:13:40.251 } 00:13:40.251 } 00:13:40.251 }' 00:13:40.251 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.509 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:40.509 pt2' 00:13:40.509 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.509 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:40.509 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.509 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.509 "name": "pt1", 00:13:40.509 "aliases": [ 00:13:40.509 "00000000-0000-0000-0000-000000000001" 00:13:40.509 ], 00:13:40.509 "product_name": "passthru", 00:13:40.509 "block_size": 512, 00:13:40.509 "num_blocks": 65536, 00:13:40.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.509 "assigned_rate_limits": { 00:13:40.509 "rw_ios_per_sec": 0, 00:13:40.509 "rw_mbytes_per_sec": 0, 00:13:40.509 "r_mbytes_per_sec": 0, 00:13:40.509 "w_mbytes_per_sec": 0 00:13:40.509 }, 00:13:40.509 "claimed": true, 00:13:40.510 "claim_type": "exclusive_write", 00:13:40.510 "zoned": false, 00:13:40.510 "supported_io_types": { 00:13:40.510 "read": true, 00:13:40.510 "write": true, 00:13:40.510 "unmap": true, 00:13:40.510 "flush": true, 00:13:40.510 "reset": true, 00:13:40.510 "nvme_admin": false, 00:13:40.510 "nvme_io": false, 00:13:40.510 "nvme_io_md": false, 00:13:40.510 "write_zeroes": true, 00:13:40.510 "zcopy": true, 00:13:40.510 "get_zone_info": false, 00:13:40.510 "zone_management": false, 00:13:40.510 "zone_append": false, 00:13:40.510 "compare": false, 00:13:40.510 "compare_and_write": false, 00:13:40.510 "abort": true, 00:13:40.510 "seek_hole": false, 00:13:40.510 "seek_data": false, 00:13:40.510 "copy": true, 00:13:40.510 "nvme_iov_md": false 00:13:40.510 }, 00:13:40.510 "memory_domains": [ 00:13:40.510 { 00:13:40.510 "dma_device_id": "system", 00:13:40.510 "dma_device_type": 1 00:13:40.510 }, 00:13:40.510 { 00:13:40.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.510 "dma_device_type": 2 00:13:40.510 } 00:13:40.510 ], 00:13:40.510 "driver_specific": { 00:13:40.510 "passthru": { 00:13:40.510 "name": "pt1", 00:13:40.510 "base_bdev_name": "malloc1" 00:13:40.510 } 00:13:40.510 } 00:13:40.510 }' 00:13:40.510 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.766 10:23:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.766 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.023 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.023 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.023 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.023 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:41.023 10:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.588 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.588 "name": "pt2", 00:13:41.588 "aliases": [ 00:13:41.588 "00000000-0000-0000-0000-000000000002" 00:13:41.588 ], 00:13:41.588 "product_name": "passthru", 00:13:41.588 "block_size": 512, 00:13:41.588 "num_blocks": 65536, 00:13:41.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.588 "assigned_rate_limits": { 00:13:41.588 "rw_ios_per_sec": 0, 00:13:41.588 "rw_mbytes_per_sec": 0, 00:13:41.588 "r_mbytes_per_sec": 0, 00:13:41.588 "w_mbytes_per_sec": 0 00:13:41.588 }, 00:13:41.588 "claimed": true, 00:13:41.588 "claim_type": "exclusive_write", 00:13:41.588 "zoned": false, 00:13:41.588 "supported_io_types": { 00:13:41.588 "read": true, 00:13:41.588 "write": true, 00:13:41.588 "unmap": true, 00:13:41.588 "flush": true, 00:13:41.588 "reset": true, 00:13:41.588 "nvme_admin": false, 00:13:41.588 "nvme_io": false, 00:13:41.588 "nvme_io_md": false, 00:13:41.588 "write_zeroes": true, 00:13:41.588 "zcopy": true, 00:13:41.588 "get_zone_info": false, 00:13:41.589 "zone_management": false, 00:13:41.589 "zone_append": false, 00:13:41.589 "compare": false, 00:13:41.589 "compare_and_write": false, 00:13:41.589 "abort": true, 00:13:41.589 "seek_hole": false, 00:13:41.589 "seek_data": false, 00:13:41.589 "copy": true, 00:13:41.589 "nvme_iov_md": false 00:13:41.589 }, 00:13:41.589 "memory_domains": [ 00:13:41.589 { 00:13:41.589 "dma_device_id": "system", 00:13:41.589 "dma_device_type": 1 00:13:41.589 }, 00:13:41.589 { 00:13:41.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.589 "dma_device_type": 2 00:13:41.589 } 00:13:41.589 ], 00:13:41.589 "driver_specific": { 00:13:41.589 "passthru": { 00:13:41.589 "name": "pt2", 00:13:41.589 "base_bdev_name": "malloc2" 00:13:41.589 } 00:13:41.589 } 00:13:41.589 }' 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.589 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:41.846 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:13:42.103 [2024-07-26 10:23:54.799127] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.103 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=b8b2f7e5-1574-42ab-a3d7-3882d94138e3 00:13:42.103 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z b8b2f7e5-1574-42ab-a3d7-3882d94138e3 ']' 00:13:42.103 10:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:42.361 [2024-07-26 10:23:55.023475] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.361 [2024-07-26 10:23:55.023491] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.361 [2024-07-26 10:23:55.023538] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.361 [2024-07-26 10:23:55.023578] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.361 [2024-07-26 10:23:55.023589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1febf20 name raid_bdev1, state offline 00:13:42.361 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.361 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:13:42.361 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:13:42.361 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:13:42.361 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:42.361 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:42.618 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:42.618 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:42.877 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:42.877 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'malloc1 malloc2' -n raid_bdev1 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.135 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:43.136 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:43.136 10:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:43.394 [2024-07-26 10:23:56.074212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:43.394 [2024-07-26 10:23:56.075477] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:43.394 [2024-07-26 10:23:56.075527] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:43.394 [2024-07-26 10:23:56.075564] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:43.394 [2024-07-26 10:23:56.075581] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.394 [2024-07-26 10:23:56.075590] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2021a60 name raid_bdev1, state configuring 00:13:43.394 request: 00:13:43.394 { 00:13:43.394 "name": "raid_bdev1", 00:13:43.394 "raid_level": "raid0", 00:13:43.394 "base_bdevs": [ 00:13:43.394 "malloc1", 00:13:43.394 "malloc2" 00:13:43.394 ], 00:13:43.394 "strip_size_kb": 64, 00:13:43.394 "superblock": false, 00:13:43.394 "method": "bdev_raid_create", 00:13:43.394 "req_id": 1 00:13:43.394 } 00:13:43.394 Got JSON-RPC error response 00:13:43.394 response: 00:13:43.394 { 00:13:43.394 "code": -17, 00:13:43.394 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:43.394 } 00:13:43.394 10:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:43.394 10:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.394 10:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.394 10:23:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.394 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.394 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:43.960 [2024-07-26 10:23:56.747912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.960 [2024-07-26 10:23:56.747951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.960 [2024-07-26 10:23:56.747968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eb5d90 00:13:43.960 [2024-07-26 10:23:56.747978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.960 [2024-07-26 10:23:56.749428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.960 [2024-07-26 10:23:56.749456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.960 [2024-07-26 10:23:56.749517] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.960 [2024-07-26 10:23:56.749538] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.960 pt1 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.960 10:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.526 10:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:44.526 "name": "raid_bdev1", 00:13:44.526 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:44.526 "strip_size_kb": 64, 00:13:44.526 "state": "configuring", 00:13:44.526 "raid_level": "raid0", 00:13:44.526 
"superblock": true, 00:13:44.526 "num_base_bdevs": 2, 00:13:44.526 "num_base_bdevs_discovered": 1, 00:13:44.526 "num_base_bdevs_operational": 2, 00:13:44.526 "base_bdevs_list": [ 00:13:44.526 { 00:13:44.526 "name": "pt1", 00:13:44.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.526 "is_configured": true, 00:13:44.526 "data_offset": 2048, 00:13:44.526 "data_size": 63488 00:13:44.526 }, 00:13:44.526 { 00:13:44.526 "name": null, 00:13:44.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.526 "is_configured": false, 00:13:44.526 "data_offset": 2048, 00:13:44.526 "data_size": 63488 00:13:44.526 } 00:13:44.526 ] 00:13:44.526 }' 00:13:44.526 10:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:44.526 10:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 10:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:13:45.091 10:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:13:45.091 10:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:45.091 10:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:45.357 [2024-07-26 10:23:58.055433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:45.357 [2024-07-26 10:23:58.055483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.357 [2024-07-26 10:23:58.055501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eb5100 00:13:45.357 [2024-07-26 10:23:58.055516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.357 [2024-07-26 10:23:58.055823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.357 [2024-07-26 10:23:58.055839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:45.357 [2024-07-26 10:23:58.055896] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:45.357 [2024-07-26 10:23:58.055913] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.357 [2024-07-26 10:23:58.056000] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fef510 00:13:45.357 [2024-07-26 10:23:58.056010] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:45.357 [2024-07-26 10:23:58.056170] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ecb7f0 00:13:45.357 [2024-07-26 10:23:58.056281] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fef510 00:13:45.357 [2024-07-26 10:23:58.056290] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fef510 00:13:45.357 [2024-07-26 10:23:58.056380] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.357 pt2 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.357 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:45.358 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.358 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.358 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.358 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.358 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.358 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.637 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.637 "name": "raid_bdev1", 00:13:45.637 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:45.637 "strip_size_kb": 64, 00:13:45.637 "state": "online", 00:13:45.637 "raid_level": "raid0", 00:13:45.637 "superblock": true, 00:13:45.637 "num_base_bdevs": 2, 00:13:45.637 "num_base_bdevs_discovered": 2, 00:13:45.637 "num_base_bdevs_operational": 2, 00:13:45.637 "base_bdevs_list": [ 00:13:45.637 { 00:13:45.637 "name": "pt1", 00:13:45.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:45.637 "is_configured": true, 00:13:45.637 "data_offset": 2048, 00:13:45.637 "data_size": 63488 00:13:45.637 }, 00:13:45.637 { 00:13:45.637 "name": "pt2", 00:13:45.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.637 "is_configured": true, 00:13:45.637 "data_offset": 2048, 00:13:45.637 "data_size": 63488 00:13:45.637 } 00:13:45.637 ] 00:13:45.637 }' 00:13:45.637 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.637 10:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:46.202 10:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:46.202 [2024-07-26 10:23:59.086378] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.202 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:46.202 
"name": "raid_bdev1", 00:13:46.202 "aliases": [ 00:13:46.202 "b8b2f7e5-1574-42ab-a3d7-3882d94138e3" 00:13:46.202 ], 00:13:46.202 "product_name": "Raid Volume", 00:13:46.202 "block_size": 512, 00:13:46.202 "num_blocks": 126976, 00:13:46.202 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:46.202 "assigned_rate_limits": { 00:13:46.202 "rw_ios_per_sec": 0, 00:13:46.202 "rw_mbytes_per_sec": 0, 00:13:46.202 "r_mbytes_per_sec": 0, 00:13:46.202 "w_mbytes_per_sec": 0 00:13:46.202 }, 00:13:46.202 "claimed": false, 00:13:46.202 "zoned": false, 00:13:46.202 "supported_io_types": { 00:13:46.202 "read": true, 00:13:46.202 "write": true, 00:13:46.202 "unmap": true, 00:13:46.202 "flush": true, 00:13:46.202 "reset": true, 00:13:46.202 "nvme_admin": false, 00:13:46.202 "nvme_io": false, 00:13:46.202 "nvme_io_md": false, 00:13:46.202 "write_zeroes": true, 00:13:46.202 "zcopy": false, 00:13:46.202 "get_zone_info": false, 00:13:46.202 "zone_management": false, 00:13:46.202 "zone_append": false, 00:13:46.202 "compare": false, 00:13:46.202 "compare_and_write": false, 00:13:46.202 "abort": false, 00:13:46.202 "seek_hole": false, 00:13:46.202 "seek_data": false, 00:13:46.202 "copy": false, 00:13:46.202 "nvme_iov_md": false 00:13:46.202 }, 00:13:46.202 "memory_domains": [ 00:13:46.202 { 00:13:46.202 "dma_device_id": "system", 00:13:46.202 "dma_device_type": 1 00:13:46.202 }, 00:13:46.202 { 00:13:46.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.202 "dma_device_type": 2 00:13:46.202 }, 00:13:46.202 { 00:13:46.202 "dma_device_id": "system", 00:13:46.202 "dma_device_type": 1 00:13:46.202 }, 00:13:46.202 { 00:13:46.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.202 "dma_device_type": 2 00:13:46.202 } 00:13:46.202 ], 00:13:46.202 "driver_specific": { 00:13:46.202 "raid": { 00:13:46.202 "uuid": "b8b2f7e5-1574-42ab-a3d7-3882d94138e3", 00:13:46.202 "strip_size_kb": 64, 00:13:46.202 "state": "online", 00:13:46.202 "raid_level": "raid0", 00:13:46.202 "superblock": true, 00:13:46.202 "num_base_bdevs": 2, 00:13:46.202 "num_base_bdevs_discovered": 2, 00:13:46.202 "num_base_bdevs_operational": 2, 00:13:46.202 "base_bdevs_list": [ 00:13:46.202 { 00:13:46.202 "name": "pt1", 00:13:46.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.202 "is_configured": true, 00:13:46.202 "data_offset": 2048, 00:13:46.202 "data_size": 63488 00:13:46.202 }, 00:13:46.202 { 00:13:46.202 "name": "pt2", 00:13:46.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.202 "is_configured": true, 00:13:46.202 "data_offset": 2048, 00:13:46.202 "data_size": 63488 00:13:46.202 } 00:13:46.202 ] 00:13:46.202 } 00:13:46.202 } 00:13:46.202 }' 00:13:46.460 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.460 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:46.460 pt2' 00:13:46.460 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.460 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:46.460 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:46.718 "name": "pt1", 00:13:46.718 "aliases": [ 00:13:46.718 "00000000-0000-0000-0000-000000000001" 
00:13:46.718 ], 00:13:46.718 "product_name": "passthru", 00:13:46.718 "block_size": 512, 00:13:46.718 "num_blocks": 65536, 00:13:46.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.718 "assigned_rate_limits": { 00:13:46.718 "rw_ios_per_sec": 0, 00:13:46.718 "rw_mbytes_per_sec": 0, 00:13:46.718 "r_mbytes_per_sec": 0, 00:13:46.718 "w_mbytes_per_sec": 0 00:13:46.718 }, 00:13:46.718 "claimed": true, 00:13:46.718 "claim_type": "exclusive_write", 00:13:46.718 "zoned": false, 00:13:46.718 "supported_io_types": { 00:13:46.718 "read": true, 00:13:46.718 "write": true, 00:13:46.718 "unmap": true, 00:13:46.718 "flush": true, 00:13:46.718 "reset": true, 00:13:46.718 "nvme_admin": false, 00:13:46.718 "nvme_io": false, 00:13:46.718 "nvme_io_md": false, 00:13:46.718 "write_zeroes": true, 00:13:46.718 "zcopy": true, 00:13:46.718 "get_zone_info": false, 00:13:46.718 "zone_management": false, 00:13:46.718 "zone_append": false, 00:13:46.718 "compare": false, 00:13:46.718 "compare_and_write": false, 00:13:46.718 "abort": true, 00:13:46.718 "seek_hole": false, 00:13:46.718 "seek_data": false, 00:13:46.718 "copy": true, 00:13:46.718 "nvme_iov_md": false 00:13:46.718 }, 00:13:46.718 "memory_domains": [ 00:13:46.718 { 00:13:46.718 "dma_device_id": "system", 00:13:46.718 "dma_device_type": 1 00:13:46.718 }, 00:13:46.718 { 00:13:46.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.718 "dma_device_type": 2 00:13:46.718 } 00:13:46.718 ], 00:13:46.718 "driver_specific": { 00:13:46.718 "passthru": { 00:13:46.718 "name": "pt1", 00:13:46.718 "base_bdev_name": "malloc1" 00:13:46.718 } 00:13:46.718 } 00:13:46.718 }' 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.718 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:46.975 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:47.233 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:47.233 "name": "pt2", 00:13:47.233 "aliases": [ 00:13:47.233 "00000000-0000-0000-0000-000000000002" 00:13:47.233 ], 00:13:47.233 "product_name": "passthru", 00:13:47.233 "block_size": 512, 00:13:47.233 "num_blocks": 65536, 00:13:47.233 
"uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.233 "assigned_rate_limits": { 00:13:47.233 "rw_ios_per_sec": 0, 00:13:47.233 "rw_mbytes_per_sec": 0, 00:13:47.233 "r_mbytes_per_sec": 0, 00:13:47.233 "w_mbytes_per_sec": 0 00:13:47.233 }, 00:13:47.233 "claimed": true, 00:13:47.233 "claim_type": "exclusive_write", 00:13:47.233 "zoned": false, 00:13:47.233 "supported_io_types": { 00:13:47.233 "read": true, 00:13:47.233 "write": true, 00:13:47.233 "unmap": true, 00:13:47.233 "flush": true, 00:13:47.233 "reset": true, 00:13:47.233 "nvme_admin": false, 00:13:47.233 "nvme_io": false, 00:13:47.233 "nvme_io_md": false, 00:13:47.233 "write_zeroes": true, 00:13:47.233 "zcopy": true, 00:13:47.233 "get_zone_info": false, 00:13:47.233 "zone_management": false, 00:13:47.233 "zone_append": false, 00:13:47.233 "compare": false, 00:13:47.233 "compare_and_write": false, 00:13:47.233 "abort": true, 00:13:47.233 "seek_hole": false, 00:13:47.233 "seek_data": false, 00:13:47.233 "copy": true, 00:13:47.233 "nvme_iov_md": false 00:13:47.233 }, 00:13:47.233 "memory_domains": [ 00:13:47.233 { 00:13:47.233 "dma_device_id": "system", 00:13:47.233 "dma_device_type": 1 00:13:47.233 }, 00:13:47.233 { 00:13:47.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.233 "dma_device_type": 2 00:13:47.233 } 00:13:47.233 ], 00:13:47.233 "driver_specific": { 00:13:47.233 "passthru": { 00:13:47.233 "name": "pt2", 00:13:47.233 "base_bdev_name": "malloc2" 00:13:47.233 } 00:13:47.233 } 00:13:47.233 }' 00:13:47.233 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.233 10:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.233 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:47.233 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.233 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.233 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:47.233 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:47.491 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:13:47.749 [2024-07-26 10:24:00.498101] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' b8b2f7e5-1574-42ab-a3d7-3882d94138e3 '!=' b8b2f7e5-1574-42ab-a3d7-3882d94138e3 ']' 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # 
return 1 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3347944 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3347944 ']' 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3347944 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3347944 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3347944' 00:13:47.749 killing process with pid 3347944 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3347944 00:13:47.749 [2024-07-26 10:24:00.577816] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.749 [2024-07-26 10:24:00.577867] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.749 [2024-07-26 10:24:00.577907] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.749 [2024-07-26 10:24:00.577917] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fef510 name raid_bdev1, state offline 00:13:47.749 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3347944 00:13:47.749 [2024-07-26 10:24:00.593714] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.008 10:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:13:48.008 00:13:48.008 real 0m10.759s 00:13:48.008 user 0m19.338s 00:13:48.008 sys 0m1.933s 00:13:48.008 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.008 10:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.008 ************************************ 00:13:48.008 END TEST raid_superblock_test 00:13:48.008 ************************************ 00:13:48.008 10:24:00 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:13:48.008 10:24:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:48.008 10:24:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.008 10:24:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.008 ************************************ 00:13:48.008 START TEST raid_read_error_test 00:13:48.008 ************************************ 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.8CbgcEZBL1 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3350069 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3350069 /var/tmp/spdk-raid.sock 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3350069 ']' 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:48.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.008 10:24:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.267 [2024-07-26 10:24:00.913164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
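The trace above is the raid_read_error_test setup: bdevperf is started in RPC-server mode on /var/tmp/spdk-raid.sock with a one-minute randrw workload targeting raid_bdev1, and the test waits for the socket before driving it over RPC. A minimal sketch of that launch pattern follows; the backgrounding and the redirection of bdevperf output into the mktemp log file are inferred from the later grep of that file, not shown verbatim in the trace:

    bdevperf_log=$(mktemp -p /raidtest)
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
        > "$bdevperf_log" 2>&1 &
    raid_pid=$!
    # waitforlisten (from autotest_common.sh) polls until the RPC socket accepts connections
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock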
00:13:48.267 [2024-07-26 10:24:00.913221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350069 ] 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:48.267 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:48.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:48.267 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:48.267 [2024-07-26 10:24:01.049639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.267 [2024-07-26 10:24:01.093223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.267 [2024-07-26 10:24:01.157684] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.267 [2024-07-26 10:24:01.157726] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.200 10:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.200 10:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:49.200 10:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:49.200 10:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:49.200 BaseBdev1_malloc 00:13:49.200 10:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:49.458 true 00:13:49.458 10:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:49.716 [2024-07-26 10:24:02.433230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:49.716 [2024-07-26 10:24:02.433276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.716 [2024-07-26 10:24:02.433294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27037c0 00:13:49.716 [2024-07-26 10:24:02.433305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.716 [2024-07-26 10:24:02.434871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.716 [2024-07-26 10:24:02.434899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.716 BaseBdev1 00:13:49.716 10:24:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:49.716 10:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:49.974 BaseBdev2_malloc 00:13:49.974 10:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:50.232 true 00:13:50.232 10:24:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:50.232 [2024-07-26 10:24:03.103218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:50.232 [2024-07-26 10:24:03.103263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.232 [2024-07-26 10:24:03.103282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26aa960 00:13:50.232 [2024-07-26 10:24:03.103294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.232 [2024-07-26 10:24:03.104715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.232 [2024-07-26 10:24:03.104744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.232 BaseBdev2 00:13:50.232 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:50.491 [2024-07-26 10:24:03.331844] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.491 [2024-07-26 10:24:03.333012] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.491 [2024-07-26 10:24:03.333170] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2551860 00:13:50.491 [2024-07-26 10:24:03.333182] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:50.491 [2024-07-26 10:24:03.333365] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26aa280 00:13:50.491 [2024-07-26 10:24:03.333491] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2551860 00:13:50.491 [2024-07-26 10:24:03.333500] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2551860 00:13:50.491 [2024-07-26 10:24:03.333610] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.491 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.749 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.749 "name": "raid_bdev1", 00:13:50.749 "uuid": "829f20d3-fbe3-45d3-b875-be28069d3080", 00:13:50.749 "strip_size_kb": 64, 00:13:50.749 "state": "online", 00:13:50.749 "raid_level": "raid0", 00:13:50.749 "superblock": true, 00:13:50.749 "num_base_bdevs": 2, 00:13:50.749 "num_base_bdevs_discovered": 2, 00:13:50.749 "num_base_bdevs_operational": 2, 00:13:50.749 "base_bdevs_list": [ 00:13:50.749 { 00:13:50.749 "name": "BaseBdev1", 00:13:50.749 "uuid": "16eaf746-c299-5e3b-bb95-7364e9e5386d", 00:13:50.749 "is_configured": true, 00:13:50.749 "data_offset": 2048, 00:13:50.749 "data_size": 63488 00:13:50.749 }, 00:13:50.749 { 00:13:50.749 "name": "BaseBdev2", 00:13:50.749 "uuid": "c51f0915-8f76-5e4c-9ee4-9e0543bf9142", 00:13:50.749 "is_configured": true, 00:13:50.749 "data_offset": 2048, 00:13:50.749 "data_size": 63488 00:13:50.749 } 00:13:50.749 ] 00:13:50.749 }' 00:13:50.749 10:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.749 10:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.312 10:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:51.312 10:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:51.570 [2024-07-26 10:24:04.250489] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26acd10 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.504 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.762 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.762 "name": "raid_bdev1", 00:13:52.762 "uuid": "829f20d3-fbe3-45d3-b875-be28069d3080", 00:13:52.762 "strip_size_kb": 64, 00:13:52.762 "state": "online", 00:13:52.762 "raid_level": "raid0", 00:13:52.762 "superblock": true, 00:13:52.762 "num_base_bdevs": 2, 00:13:52.762 "num_base_bdevs_discovered": 2, 00:13:52.762 "num_base_bdevs_operational": 2, 00:13:52.762 "base_bdevs_list": [ 00:13:52.762 { 00:13:52.762 "name": "BaseBdev1", 00:13:52.762 "uuid": "16eaf746-c299-5e3b-bb95-7364e9e5386d", 00:13:52.762 "is_configured": true, 00:13:52.762 "data_offset": 2048, 00:13:52.762 "data_size": 63488 00:13:52.762 }, 00:13:52.762 { 00:13:52.762 "name": "BaseBdev2", 00:13:52.762 "uuid": "c51f0915-8f76-5e4c-9ee4-9e0543bf9142", 00:13:52.762 "is_configured": true, 00:13:52.762 "data_offset": 2048, 00:13:52.762 "data_size": 63488 00:13:52.762 } 00:13:52.762 ] 00:13:52.762 }' 00:13:52.762 10:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.762 10:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.329 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:53.588 [2024-07-26 10:24:06.433565] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.588 [2024-07-26 10:24:06.433599] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.588 [2024-07-26 10:24:06.436551] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.588 [2024-07-26 10:24:06.436579] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.588 [2024-07-26 10:24:06.436604] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.588 [2024-07-26 10:24:06.436613] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2551860 name raid_bdev1, state offline 00:13:53.588 0 00:13:53.588 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3350069 00:13:53.588 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3350069 ']' 00:13:53.588 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3350069 00:13:53.588 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:53.588 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.588 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3350069 00:13:53.847 10:24:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3350069' 00:13:53.847 killing process with pid 3350069 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3350069 00:13:53.847 [2024-07-26 10:24:06.512598] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3350069 00:13:53.847 [2024-07-26 10:24:06.522498] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.8CbgcEZBL1 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:13:53.847 00:13:53.847 real 0m5.863s 00:13:53.847 user 0m9.087s 00:13:53.847 sys 0m1.054s 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.847 10:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.847 ************************************ 00:13:53.847 END TEST raid_read_error_test 00:13:53.847 ************************************ 00:13:54.106 10:24:06 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:13:54.106 10:24:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:54.106 10:24:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.106 10:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.106 ************************************ 00:13:54.106 START TEST raid_write_error_test 00:13:54.106 ************************************ 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:54.106 
10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.HuG1lsACOo 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3351707 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3351707 /var/tmp/spdk-raid.sock 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3351707 ']' 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:54.106 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.107 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:54.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:54.107 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.107 10:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.107 [2024-07-26 10:24:06.873324] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
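Both error tests finish the same way: after perform_tests completes, the per-target line for raid_bdev1 is pulled out of the bdevperf log and its sixth column (the value the script stores as fail_per_s) is checked. Because raid0 has no redundancy, an injected read or write error must surface as a non-zero failure rate. A condensed sketch of that check, using the read-test log file seen above:

    fail_per_s=$(grep -v Job /raidtest/tmp.8CbgcEZBL1 | grep raid_bdev1 | awk '{print $6}')
    # has_redundancy returns 1 for raid0, so a clean 0.00 rate here would mean the injection never bit
    [[ "$fail_per_s" != "0.00" ]]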
00:13:54.107 [2024-07-26 10:24:06.873385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351707 ] 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:54.107 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:54.107 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:54.107 [2024-07-26 10:24:07.007025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.366 [2024-07-26 10:24:07.049798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.366 [2024-07-26 10:24:07.108453] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.366 [2024-07-26 10:24:07.108490] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.932 10:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.932 10:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:54.932 10:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:54.932 10:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.191 BaseBdev1_malloc 00:13:55.191 10:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:55.449 true 00:13:55.449 10:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:55.707 [2024-07-26 10:24:08.448182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:55.707 [2024-07-26 10:24:08.448224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.707 [2024-07-26 10:24:08.448241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26a77c0 00:13:55.707 [2024-07-26 10:24:08.448253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.707 [2024-07-26 10:24:08.449735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.707 [2024-07-26 10:24:08.449762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.707 BaseBdev1 00:13:55.707 10:24:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:55.707 10:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.965 BaseBdev2_malloc 00:13:55.965 10:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:56.223 true 00:13:56.223 10:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:56.482 [2024-07-26 10:24:09.138111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:56.482 [2024-07-26 10:24:09.138154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.482 [2024-07-26 10:24:09.138172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x264e960 00:13:56.482 [2024-07-26 10:24:09.138184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.482 [2024-07-26 10:24:09.139425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.482 [2024-07-26 10:24:09.139451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:56.482 BaseBdev2 00:13:56.482 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:56.482 [2024-07-26 10:24:09.366732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.482 [2024-07-26 10:24:09.367828] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.482 [2024-07-26 10:24:09.367974] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x24f5860 00:13:56.482 [2024-07-26 10:24:09.367986] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:56.482 [2024-07-26 10:24:09.368165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264e280 00:13:56.482 [2024-07-26 10:24:09.368287] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24f5860 00:13:56.482 [2024-07-26 10:24:09.368296] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x24f5860 00:13:56.482 [2024-07-26 10:24:09.368399] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.742 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.742 "name": "raid_bdev1", 00:13:56.742 "uuid": "d84e6e42-cffe-4d89-8b4c-8f5a74535d8c", 00:13:56.742 "strip_size_kb": 64, 00:13:56.742 "state": "online", 00:13:56.742 "raid_level": "raid0", 00:13:56.742 "superblock": true, 00:13:56.742 "num_base_bdevs": 2, 00:13:56.742 "num_base_bdevs_discovered": 2, 00:13:56.742 "num_base_bdevs_operational": 2, 00:13:56.742 "base_bdevs_list": [ 00:13:56.742 { 00:13:56.742 "name": "BaseBdev1", 00:13:56.742 "uuid": "5e063dd6-b869-5d26-9391-c0c00dd7abf9", 00:13:56.742 "is_configured": true, 00:13:56.742 "data_offset": 2048, 00:13:56.742 "data_size": 63488 00:13:56.742 }, 00:13:56.742 { 00:13:56.742 "name": "BaseBdev2", 00:13:56.743 "uuid": "8f113dac-6261-5882-9f4e-515f045039d2", 00:13:56.743 "is_configured": true, 00:13:56.743 "data_offset": 2048, 00:13:56.743 "data_size": 63488 00:13:56.743 } 00:13:56.743 ] 00:13:56.743 }' 00:13:56.743 10:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.743 10:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.311 10:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:57.311 10:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:57.570 [2024-07-26 10:24:10.301453] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2650d10 00:13:58.510 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:58.819 10:24:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.819 "name": "raid_bdev1", 00:13:58.819 "uuid": "d84e6e42-cffe-4d89-8b4c-8f5a74535d8c", 00:13:58.819 "strip_size_kb": 64, 00:13:58.819 "state": "online", 00:13:58.819 "raid_level": "raid0", 00:13:58.819 "superblock": true, 00:13:58.819 "num_base_bdevs": 2, 00:13:58.819 "num_base_bdevs_discovered": 2, 00:13:58.819 "num_base_bdevs_operational": 2, 00:13:58.819 "base_bdevs_list": [ 00:13:58.819 { 00:13:58.819 "name": "BaseBdev1", 00:13:58.819 "uuid": "5e063dd6-b869-5d26-9391-c0c00dd7abf9", 00:13:58.819 "is_configured": true, 00:13:58.819 "data_offset": 2048, 00:13:58.819 "data_size": 63488 00:13:58.819 }, 00:13:58.819 { 00:13:58.819 "name": "BaseBdev2", 00:13:58.819 "uuid": "8f113dac-6261-5882-9f4e-515f045039d2", 00:13:58.819 "is_configured": true, 00:13:58.819 "data_offset": 2048, 00:13:58.819 "data_size": 63488 00:13:58.819 } 00:13:58.819 ] 00:13:58.819 }' 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.819 10:24:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.385 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:59.643 [2024-07-26 10:24:12.439380] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.643 [2024-07-26 10:24:12.439423] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.643 [2024-07-26 10:24:12.442357] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.643 [2024-07-26 10:24:12.442386] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.643 [2024-07-26 10:24:12.442410] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.643 [2024-07-26 10:24:12.442421] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24f5860 name raid_bdev1, state offline 00:13:59.643 0 00:13:59.643 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3351707 00:13:59.643 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3351707 ']' 00:13:59.643 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3351707 00:13:59.643 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3351707 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3351707' 00:13:59.644 killing process with pid 3351707 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3351707 00:13:59.644 [2024-07-26 10:24:12.518622] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.644 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3351707 00:13:59.644 [2024-07-26 10:24:12.527971] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.HuG1lsACOo 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:13:59.902 00:13:59.902 real 0m5.921s 00:13:59.902 user 0m9.130s 00:13:59.902 sys 0m1.120s 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.902 10:24:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.902 ************************************ 00:13:59.902 END TEST raid_write_error_test 00:13:59.902 ************************************ 00:13:59.902 10:24:12 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:59.902 10:24:12 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:59.902 10:24:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:59.902 10:24:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.902 10:24:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.902 ************************************ 00:13:59.902 START TEST raid_state_function_test 00:13:59.902 ************************************ 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:59.902 10:24:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:59.902 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3352859 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3352859' 00:14:00.162 Process raid pid: 3352859 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3352859 /var/tmp/spdk-raid.sock 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3352859 ']' 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:00.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:00.162 10:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.162 [2024-07-26 10:24:12.861899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
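Before the write-error test tears down, it injects failures into the error bdev wrapped by BaseBdev1 and re-reads the array state; for raid0 the expected number of base bdevs stays at 2 and the array reports online until I/O actually fails. A sketch of that injection/verification pair, reusing the rpc.py invocations from the trace ($rpc is only shorthand for the full scripts/rpc.py path):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # verify_raid_bdev_state parses this JSON for state, raid_level, strip_size_kb and num_base_bdevs
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'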
00:14:00.162 [2024-07-26 10:24:12.861954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:00.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.162 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:00.163 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:00.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:00.163 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:00.163 [2024-07-26 10:24:12.995115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.163 [2024-07-26 10:24:13.039308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.422 [2024-07-26 10:24:13.099598] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.422 [2024-07-26 10:24:13.099632] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:00.990 [2024-07-26 10:24:13.827416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.990 [2024-07-26 10:24:13.827455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.990 [2024-07-26 10:24:13.827465] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.990 [2024-07-26 10:24:13.827476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.990 10:24:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.990 10:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.250 10:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.250 "name": "Existed_Raid", 00:14:01.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.250 "strip_size_kb": 64, 00:14:01.250 "state": "configuring", 00:14:01.250 "raid_level": "concat", 00:14:01.250 "superblock": false, 00:14:01.250 "num_base_bdevs": 2, 00:14:01.250 "num_base_bdevs_discovered": 0, 00:14:01.250 "num_base_bdevs_operational": 2, 00:14:01.250 "base_bdevs_list": [ 00:14:01.250 { 00:14:01.250 "name": "BaseBdev1", 00:14:01.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.250 "is_configured": false, 00:14:01.250 "data_offset": 0, 00:14:01.250 "data_size": 0 00:14:01.250 }, 00:14:01.250 { 00:14:01.250 "name": "BaseBdev2", 00:14:01.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.250 "is_configured": false, 00:14:01.250 "data_offset": 0, 00:14:01.250 "data_size": 0 00:14:01.250 } 00:14:01.250 ] 00:14:01.250 }' 00:14:01.250 10:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.250 10:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.818 10:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:02.077 [2024-07-26 10:24:14.837970] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.077 [2024-07-26 10:24:14.838000] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fe3d00 name Existed_Raid, state configuring 00:14:02.077 10:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:02.336 [2024-07-26 10:24:15.062560] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.336 [2024-07-26 10:24:15.062586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.336 [2024-07-26 10:24:15.062595] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.336 [2024-07-26 10:24:15.062606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.336 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.597 [2024-07-26 10:24:15.296592] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.597 BaseBdev1 00:14:02.597 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:02.597 
10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:02.597 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.597 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:02.597 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.597 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.597 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.856 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:02.856 [ 00:14:02.856 { 00:14:02.856 "name": "BaseBdev1", 00:14:02.856 "aliases": [ 00:14:02.856 "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4" 00:14:02.856 ], 00:14:02.856 "product_name": "Malloc disk", 00:14:02.856 "block_size": 512, 00:14:02.856 "num_blocks": 65536, 00:14:02.856 "uuid": "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4", 00:14:02.856 "assigned_rate_limits": { 00:14:02.856 "rw_ios_per_sec": 0, 00:14:02.856 "rw_mbytes_per_sec": 0, 00:14:02.856 "r_mbytes_per_sec": 0, 00:14:02.856 "w_mbytes_per_sec": 0 00:14:02.856 }, 00:14:02.856 "claimed": true, 00:14:02.856 "claim_type": "exclusive_write", 00:14:02.856 "zoned": false, 00:14:02.856 "supported_io_types": { 00:14:02.856 "read": true, 00:14:02.856 "write": true, 00:14:02.856 "unmap": true, 00:14:02.856 "flush": true, 00:14:02.856 "reset": true, 00:14:02.856 "nvme_admin": false, 00:14:02.856 "nvme_io": false, 00:14:02.856 "nvme_io_md": false, 00:14:02.856 "write_zeroes": true, 00:14:02.856 "zcopy": true, 00:14:02.856 "get_zone_info": false, 00:14:02.856 "zone_management": false, 00:14:02.856 "zone_append": false, 00:14:02.856 "compare": false, 00:14:02.856 "compare_and_write": false, 00:14:02.856 "abort": true, 00:14:02.856 "seek_hole": false, 00:14:02.856 "seek_data": false, 00:14:02.856 "copy": true, 00:14:02.856 "nvme_iov_md": false 00:14:02.856 }, 00:14:02.856 "memory_domains": [ 00:14:02.856 { 00:14:02.856 "dma_device_id": "system", 00:14:02.856 "dma_device_type": 1 00:14:02.856 }, 00:14:02.856 { 00:14:02.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.856 "dma_device_type": 2 00:14:02.856 } 00:14:02.856 ], 00:14:02.856 "driver_specific": {} 00:14:02.856 } 00:14:02.856 ] 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.115 10:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.115 10:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.115 "name": "Existed_Raid", 00:14:03.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.115 "strip_size_kb": 64, 00:14:03.115 "state": "configuring", 00:14:03.115 "raid_level": "concat", 00:14:03.115 "superblock": false, 00:14:03.115 "num_base_bdevs": 2, 00:14:03.115 "num_base_bdevs_discovered": 1, 00:14:03.115 "num_base_bdevs_operational": 2, 00:14:03.115 "base_bdevs_list": [ 00:14:03.115 { 00:14:03.115 "name": "BaseBdev1", 00:14:03.115 "uuid": "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4", 00:14:03.115 "is_configured": true, 00:14:03.115 "data_offset": 0, 00:14:03.115 "data_size": 65536 00:14:03.115 }, 00:14:03.115 { 00:14:03.115 "name": "BaseBdev2", 00:14:03.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.115 "is_configured": false, 00:14:03.115 "data_offset": 0, 00:14:03.115 "data_size": 0 00:14:03.115 } 00:14:03.115 ] 00:14:03.115 }' 00:14:03.115 10:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.115 10:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.681 10:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:03.939 [2024-07-26 10:24:16.780504] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.939 [2024-07-26 10:24:16.780541] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fe3630 name Existed_Raid, state configuring 00:14:03.939 10:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:04.198 [2024-07-26 10:24:17.005127] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.198 [2024-07-26 10:24:17.006469] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.198 [2024-07-26 10:24:17.006500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.198 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.457 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.457 "name": "Existed_Raid", 00:14:04.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.457 "strip_size_kb": 64, 00:14:04.457 "state": "configuring", 00:14:04.457 "raid_level": "concat", 00:14:04.457 "superblock": false, 00:14:04.457 "num_base_bdevs": 2, 00:14:04.457 "num_base_bdevs_discovered": 1, 00:14:04.457 "num_base_bdevs_operational": 2, 00:14:04.457 "base_bdevs_list": [ 00:14:04.457 { 00:14:04.457 "name": "BaseBdev1", 00:14:04.457 "uuid": "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4", 00:14:04.457 "is_configured": true, 00:14:04.457 "data_offset": 0, 00:14:04.457 "data_size": 65536 00:14:04.457 }, 00:14:04.457 { 00:14:04.457 "name": "BaseBdev2", 00:14:04.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.457 "is_configured": false, 00:14:04.457 "data_offset": 0, 00:14:04.457 "data_size": 0 00:14:04.457 } 00:14:04.457 ] 00:14:04.457 }' 00:14:04.457 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.457 10:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.024 10:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:05.282 [2024-07-26 10:24:18.030997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.282 [2024-07-26 10:24:18.031027] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2196190 00:14:05.282 [2024-07-26 10:24:18.031035] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:05.282 [2024-07-26 10:24:18.031266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fe6790 00:14:05.282 [2024-07-26 10:24:18.031371] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2196190 00:14:05.282 [2024-07-26 10:24:18.031380] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2196190 00:14:05.282 [2024-07-26 10:24:18.031527] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.282 BaseBdev2 00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 
00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:05.282 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.540 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:05.798 [ 00:14:05.798 { 00:14:05.798 "name": "BaseBdev2", 00:14:05.798 "aliases": [ 00:14:05.798 "f66b35d8-3768-47de-b769-56ba326826f6" 00:14:05.798 ], 00:14:05.798 "product_name": "Malloc disk", 00:14:05.798 "block_size": 512, 00:14:05.798 "num_blocks": 65536, 00:14:05.798 "uuid": "f66b35d8-3768-47de-b769-56ba326826f6", 00:14:05.798 "assigned_rate_limits": { 00:14:05.798 "rw_ios_per_sec": 0, 00:14:05.798 "rw_mbytes_per_sec": 0, 00:14:05.798 "r_mbytes_per_sec": 0, 00:14:05.798 "w_mbytes_per_sec": 0 00:14:05.798 }, 00:14:05.798 "claimed": true, 00:14:05.798 "claim_type": "exclusive_write", 00:14:05.798 "zoned": false, 00:14:05.798 "supported_io_types": { 00:14:05.798 "read": true, 00:14:05.798 "write": true, 00:14:05.798 "unmap": true, 00:14:05.798 "flush": true, 00:14:05.798 "reset": true, 00:14:05.798 "nvme_admin": false, 00:14:05.798 "nvme_io": false, 00:14:05.798 "nvme_io_md": false, 00:14:05.798 "write_zeroes": true, 00:14:05.798 "zcopy": true, 00:14:05.798 "get_zone_info": false, 00:14:05.798 "zone_management": false, 00:14:05.798 "zone_append": false, 00:14:05.798 "compare": false, 00:14:05.798 "compare_and_write": false, 00:14:05.798 "abort": true, 00:14:05.798 "seek_hole": false, 00:14:05.798 "seek_data": false, 00:14:05.798 "copy": true, 00:14:05.798 "nvme_iov_md": false 00:14:05.798 }, 00:14:05.798 "memory_domains": [ 00:14:05.798 { 00:14:05.798 "dma_device_id": "system", 00:14:05.798 "dma_device_type": 1 00:14:05.798 }, 00:14:05.798 { 00:14:05.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.798 "dma_device_type": 2 00:14:05.798 } 00:14:05.798 ], 00:14:05.799 "driver_specific": {} 00:14:05.799 } 00:14:05.799 ] 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.799 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.057 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.057 "name": "Existed_Raid", 00:14:06.057 "uuid": "b50d8051-74a1-4a26-84aa-bd175dd4ee98", 00:14:06.057 "strip_size_kb": 64, 00:14:06.057 "state": "online", 00:14:06.057 "raid_level": "concat", 00:14:06.057 "superblock": false, 00:14:06.057 "num_base_bdevs": 2, 00:14:06.057 "num_base_bdevs_discovered": 2, 00:14:06.057 "num_base_bdevs_operational": 2, 00:14:06.057 "base_bdevs_list": [ 00:14:06.057 { 00:14:06.057 "name": "BaseBdev1", 00:14:06.057 "uuid": "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4", 00:14:06.057 "is_configured": true, 00:14:06.057 "data_offset": 0, 00:14:06.057 "data_size": 65536 00:14:06.057 }, 00:14:06.057 { 00:14:06.057 "name": "BaseBdev2", 00:14:06.057 "uuid": "f66b35d8-3768-47de-b769-56ba326826f6", 00:14:06.057 "is_configured": true, 00:14:06.057 "data_offset": 0, 00:14:06.057 "data_size": 65536 00:14:06.057 } 00:14:06.057 ] 00:14:06.057 }' 00:14:06.057 10:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.057 10:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:06.624 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:06.624 [2024-07-26 10:24:19.507152] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.883 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:06.883 "name": "Existed_Raid", 00:14:06.883 "aliases": [ 00:14:06.883 "b50d8051-74a1-4a26-84aa-bd175dd4ee98" 00:14:06.883 ], 00:14:06.883 "product_name": "Raid Volume", 00:14:06.883 "block_size": 512, 00:14:06.883 "num_blocks": 131072, 00:14:06.883 "uuid": 
"b50d8051-74a1-4a26-84aa-bd175dd4ee98", 00:14:06.883 "assigned_rate_limits": { 00:14:06.883 "rw_ios_per_sec": 0, 00:14:06.883 "rw_mbytes_per_sec": 0, 00:14:06.883 "r_mbytes_per_sec": 0, 00:14:06.883 "w_mbytes_per_sec": 0 00:14:06.883 }, 00:14:06.883 "claimed": false, 00:14:06.883 "zoned": false, 00:14:06.883 "supported_io_types": { 00:14:06.883 "read": true, 00:14:06.883 "write": true, 00:14:06.883 "unmap": true, 00:14:06.883 "flush": true, 00:14:06.883 "reset": true, 00:14:06.883 "nvme_admin": false, 00:14:06.883 "nvme_io": false, 00:14:06.883 "nvme_io_md": false, 00:14:06.883 "write_zeroes": true, 00:14:06.883 "zcopy": false, 00:14:06.883 "get_zone_info": false, 00:14:06.883 "zone_management": false, 00:14:06.883 "zone_append": false, 00:14:06.883 "compare": false, 00:14:06.883 "compare_and_write": false, 00:14:06.883 "abort": false, 00:14:06.883 "seek_hole": false, 00:14:06.884 "seek_data": false, 00:14:06.884 "copy": false, 00:14:06.884 "nvme_iov_md": false 00:14:06.884 }, 00:14:06.884 "memory_domains": [ 00:14:06.884 { 00:14:06.884 "dma_device_id": "system", 00:14:06.884 "dma_device_type": 1 00:14:06.884 }, 00:14:06.884 { 00:14:06.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.884 "dma_device_type": 2 00:14:06.884 }, 00:14:06.884 { 00:14:06.884 "dma_device_id": "system", 00:14:06.884 "dma_device_type": 1 00:14:06.884 }, 00:14:06.884 { 00:14:06.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.884 "dma_device_type": 2 00:14:06.884 } 00:14:06.884 ], 00:14:06.884 "driver_specific": { 00:14:06.884 "raid": { 00:14:06.884 "uuid": "b50d8051-74a1-4a26-84aa-bd175dd4ee98", 00:14:06.884 "strip_size_kb": 64, 00:14:06.884 "state": "online", 00:14:06.884 "raid_level": "concat", 00:14:06.884 "superblock": false, 00:14:06.884 "num_base_bdevs": 2, 00:14:06.884 "num_base_bdevs_discovered": 2, 00:14:06.884 "num_base_bdevs_operational": 2, 00:14:06.884 "base_bdevs_list": [ 00:14:06.884 { 00:14:06.884 "name": "BaseBdev1", 00:14:06.884 "uuid": "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4", 00:14:06.884 "is_configured": true, 00:14:06.884 "data_offset": 0, 00:14:06.884 "data_size": 65536 00:14:06.884 }, 00:14:06.884 { 00:14:06.884 "name": "BaseBdev2", 00:14:06.884 "uuid": "f66b35d8-3768-47de-b769-56ba326826f6", 00:14:06.884 "is_configured": true, 00:14:06.884 "data_offset": 0, 00:14:06.884 "data_size": 65536 00:14:06.884 } 00:14:06.884 ] 00:14:06.884 } 00:14:06.884 } 00:14:06.884 }' 00:14:06.884 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.884 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:06.884 BaseBdev2' 00:14:06.884 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:06.884 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:06.884 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.144 "name": "BaseBdev1", 00:14:07.144 "aliases": [ 00:14:07.144 "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4" 00:14:07.144 ], 00:14:07.144 "product_name": "Malloc disk", 00:14:07.144 "block_size": 512, 00:14:07.144 "num_blocks": 65536, 00:14:07.144 "uuid": "d738ed31-f7cb-4ff4-8f6e-90d2674b5db4", 
00:14:07.144 "assigned_rate_limits": { 00:14:07.144 "rw_ios_per_sec": 0, 00:14:07.144 "rw_mbytes_per_sec": 0, 00:14:07.144 "r_mbytes_per_sec": 0, 00:14:07.144 "w_mbytes_per_sec": 0 00:14:07.144 }, 00:14:07.144 "claimed": true, 00:14:07.144 "claim_type": "exclusive_write", 00:14:07.144 "zoned": false, 00:14:07.144 "supported_io_types": { 00:14:07.144 "read": true, 00:14:07.144 "write": true, 00:14:07.144 "unmap": true, 00:14:07.144 "flush": true, 00:14:07.144 "reset": true, 00:14:07.144 "nvme_admin": false, 00:14:07.144 "nvme_io": false, 00:14:07.144 "nvme_io_md": false, 00:14:07.144 "write_zeroes": true, 00:14:07.144 "zcopy": true, 00:14:07.144 "get_zone_info": false, 00:14:07.144 "zone_management": false, 00:14:07.144 "zone_append": false, 00:14:07.144 "compare": false, 00:14:07.144 "compare_and_write": false, 00:14:07.144 "abort": true, 00:14:07.144 "seek_hole": false, 00:14:07.144 "seek_data": false, 00:14:07.144 "copy": true, 00:14:07.144 "nvme_iov_md": false 00:14:07.144 }, 00:14:07.144 "memory_domains": [ 00:14:07.144 { 00:14:07.144 "dma_device_id": "system", 00:14:07.144 "dma_device_type": 1 00:14:07.144 }, 00:14:07.144 { 00:14:07.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.144 "dma_device_type": 2 00:14:07.144 } 00:14:07.144 ], 00:14:07.144 "driver_specific": {} 00:14:07.144 }' 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.144 10:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.144 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:07.403 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.660 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.660 "name": "BaseBdev2", 00:14:07.660 "aliases": [ 00:14:07.660 "f66b35d8-3768-47de-b769-56ba326826f6" 00:14:07.660 ], 00:14:07.660 "product_name": "Malloc disk", 00:14:07.660 "block_size": 512, 00:14:07.660 "num_blocks": 65536, 00:14:07.660 "uuid": "f66b35d8-3768-47de-b769-56ba326826f6", 00:14:07.660 "assigned_rate_limits": { 00:14:07.660 "rw_ios_per_sec": 0, 00:14:07.660 "rw_mbytes_per_sec": 0, 00:14:07.660 "r_mbytes_per_sec": 0, 00:14:07.660 "w_mbytes_per_sec": 0 
00:14:07.660 }, 00:14:07.660 "claimed": true, 00:14:07.660 "claim_type": "exclusive_write", 00:14:07.660 "zoned": false, 00:14:07.660 "supported_io_types": { 00:14:07.660 "read": true, 00:14:07.660 "write": true, 00:14:07.660 "unmap": true, 00:14:07.660 "flush": true, 00:14:07.660 "reset": true, 00:14:07.660 "nvme_admin": false, 00:14:07.660 "nvme_io": false, 00:14:07.660 "nvme_io_md": false, 00:14:07.660 "write_zeroes": true, 00:14:07.661 "zcopy": true, 00:14:07.661 "get_zone_info": false, 00:14:07.661 "zone_management": false, 00:14:07.661 "zone_append": false, 00:14:07.661 "compare": false, 00:14:07.661 "compare_and_write": false, 00:14:07.661 "abort": true, 00:14:07.661 "seek_hole": false, 00:14:07.661 "seek_data": false, 00:14:07.661 "copy": true, 00:14:07.661 "nvme_iov_md": false 00:14:07.661 }, 00:14:07.661 "memory_domains": [ 00:14:07.661 { 00:14:07.661 "dma_device_id": "system", 00:14:07.661 "dma_device_type": 1 00:14:07.661 }, 00:14:07.661 { 00:14:07.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.661 "dma_device_type": 2 00:14:07.661 } 00:14:07.661 ], 00:14:07.661 "driver_specific": {} 00:14:07.661 }' 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.661 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.919 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.919 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.919 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.919 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.919 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.919 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:08.177 [2024-07-26 10:24:20.938724] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.177 [2024-07-26 10:24:20.938747] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.177 [2024-07-26 10:24:20.938785] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.177 10:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.436 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.436 "name": "Existed_Raid", 00:14:08.436 "uuid": "b50d8051-74a1-4a26-84aa-bd175dd4ee98", 00:14:08.436 "strip_size_kb": 64, 00:14:08.436 "state": "offline", 00:14:08.436 "raid_level": "concat", 00:14:08.436 "superblock": false, 00:14:08.436 "num_base_bdevs": 2, 00:14:08.436 "num_base_bdevs_discovered": 1, 00:14:08.436 "num_base_bdevs_operational": 1, 00:14:08.436 "base_bdevs_list": [ 00:14:08.436 { 00:14:08.436 "name": null, 00:14:08.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.436 "is_configured": false, 00:14:08.436 "data_offset": 0, 00:14:08.436 "data_size": 65536 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "name": "BaseBdev2", 00:14:08.436 "uuid": "f66b35d8-3768-47de-b769-56ba326826f6", 00:14:08.436 "is_configured": true, 00:14:08.436 "data_offset": 0, 00:14:08.436 "data_size": 65536 00:14:08.436 } 00:14:08.436 ] 00:14:08.436 }' 00:14:08.436 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.436 10:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.002 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:09.002 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:09.002 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.002 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:09.261 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:09.261 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.261 10:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:09.261 
[2024-07-26 10:24:22.158901] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.261 [2024-07-26 10:24:22.158945] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2196190 name Existed_Raid, state offline 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3352859 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3352859 ']' 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3352859 00:14:09.520 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3352859 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3352859' 00:14:09.779 killing process with pid 3352859 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3352859 00:14:09.779 [2024-07-26 10:24:22.476714] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3352859 00:14:09.779 [2024-07-26 10:24:22.477585] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:09.779 00:14:09.779 real 0m9.851s 00:14:09.779 user 0m17.477s 00:14:09.779 sys 0m1.873s 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.779 10:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.779 ************************************ 00:14:09.779 END TEST raid_state_function_test 00:14:09.779 ************************************ 00:14:10.037 10:24:22 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:10.037 10:24:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:10.037 10:24:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.037 10:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.037 
************************************ 00:14:10.037 START TEST raid_state_function_test_sb 00:14:10.037 ************************************ 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:10.037 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3354687 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3354687' 00:14:10.038 Process raid pid: 3354687 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3354687 /var/tmp/spdk-raid.sock 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 3354687 ']' 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:10.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.038 10:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.038 [2024-07-26 10:24:22.799385] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:14:10.038 [2024-07-26 10:24:22.799447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used, repeated for every QAT virtual function 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:14:10.038 [2024-07-26 10:24:22.924989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.297 [2024-07-26 10:24:22.970301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.297 [2024-07-26 10:24:23.031577] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.297 [2024-07-26 10:24:23.031611] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.864 10:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.864 10:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:10.864 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:11.122 [2024-07-26 10:24:23.908007] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.122 [2024-07-26 10:24:23.908045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.122 [2024-07-26 10:24:23.908056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.122 [2024-07-26 10:24:23.908067] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.122 10:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.381 10:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.381 "name": "Existed_Raid", 00:14:11.381 "uuid": "8725de14-f167-45d9-b5c0-68aedce8005d", 00:14:11.381 "strip_size_kb": 64, 00:14:11.381 "state": "configuring", 00:14:11.381 "raid_level": "concat", 00:14:11.381 "superblock": true, 00:14:11.381 "num_base_bdevs": 2, 00:14:11.381 "num_base_bdevs_discovered": 0, 00:14:11.381 "num_base_bdevs_operational": 2, 00:14:11.381 "base_bdevs_list": [ 00:14:11.381 { 00:14:11.381 "name": "BaseBdev1", 00:14:11.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.381 "is_configured": false, 00:14:11.381 "data_offset": 0, 00:14:11.381 "data_size": 0 00:14:11.381 }, 00:14:11.381 { 00:14:11.381 "name": "BaseBdev2", 00:14:11.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.381 "is_configured": false, 00:14:11.381 "data_offset": 0, 00:14:11.381 "data_size": 0 00:14:11.381 } 00:14:11.381 ] 00:14:11.381 }' 00:14:11.381 10:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.381 10:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.993 10:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:12.252 [2024-07-26 10:24:24.926560] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.252 [2024-07-26 10:24:24.926592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d8ad00 name Existed_Raid, state configuring 00:14:12.252 10:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:12.252 [2024-07-26 
10:24:25.139128] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.252 [2024-07-26 10:24:25.139170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.252 [2024-07-26 10:24:25.139179] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.252 [2024-07-26 10:24:25.139190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:12.510 [2024-07-26 10:24:25.373206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.510 BaseBdev1 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.510 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.769 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.029 [ 00:14:13.029 { 00:14:13.029 "name": "BaseBdev1", 00:14:13.029 "aliases": [ 00:14:13.029 "6908df2d-50d9-45ad-befc-6e842b86a147" 00:14:13.029 ], 00:14:13.029 "product_name": "Malloc disk", 00:14:13.029 "block_size": 512, 00:14:13.029 "num_blocks": 65536, 00:14:13.029 "uuid": "6908df2d-50d9-45ad-befc-6e842b86a147", 00:14:13.029 "assigned_rate_limits": { 00:14:13.029 "rw_ios_per_sec": 0, 00:14:13.029 "rw_mbytes_per_sec": 0, 00:14:13.029 "r_mbytes_per_sec": 0, 00:14:13.029 "w_mbytes_per_sec": 0 00:14:13.029 }, 00:14:13.029 "claimed": true, 00:14:13.029 "claim_type": "exclusive_write", 00:14:13.029 "zoned": false, 00:14:13.029 "supported_io_types": { 00:14:13.029 "read": true, 00:14:13.029 "write": true, 00:14:13.029 "unmap": true, 00:14:13.029 "flush": true, 00:14:13.029 "reset": true, 00:14:13.029 "nvme_admin": false, 00:14:13.029 "nvme_io": false, 00:14:13.029 "nvme_io_md": false, 00:14:13.029 "write_zeroes": true, 00:14:13.029 "zcopy": true, 00:14:13.029 "get_zone_info": false, 00:14:13.029 "zone_management": false, 00:14:13.029 "zone_append": false, 00:14:13.029 "compare": false, 00:14:13.029 "compare_and_write": false, 00:14:13.029 "abort": true, 00:14:13.029 "seek_hole": false, 00:14:13.029 "seek_data": false, 00:14:13.029 "copy": true, 00:14:13.029 "nvme_iov_md": false 00:14:13.029 }, 00:14:13.029 "memory_domains": [ 00:14:13.029 { 00:14:13.029 "dma_device_id": "system", 00:14:13.029 "dma_device_type": 1 00:14:13.029 }, 00:14:13.029 { 00:14:13.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.029 "dma_device_type": 2 
00:14:13.029 } 00:14:13.029 ], 00:14:13.029 "driver_specific": {} 00:14:13.029 } 00:14:13.029 ] 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.029 10:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.288 10:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.288 "name": "Existed_Raid", 00:14:13.288 "uuid": "5a31c44e-17c0-4d4e-a9c7-b030d3851e4b", 00:14:13.288 "strip_size_kb": 64, 00:14:13.288 "state": "configuring", 00:14:13.288 "raid_level": "concat", 00:14:13.288 "superblock": true, 00:14:13.288 "num_base_bdevs": 2, 00:14:13.288 "num_base_bdevs_discovered": 1, 00:14:13.288 "num_base_bdevs_operational": 2, 00:14:13.288 "base_bdevs_list": [ 00:14:13.288 { 00:14:13.288 "name": "BaseBdev1", 00:14:13.288 "uuid": "6908df2d-50d9-45ad-befc-6e842b86a147", 00:14:13.288 "is_configured": true, 00:14:13.288 "data_offset": 2048, 00:14:13.288 "data_size": 63488 00:14:13.288 }, 00:14:13.288 { 00:14:13.288 "name": "BaseBdev2", 00:14:13.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.288 "is_configured": false, 00:14:13.288 "data_offset": 0, 00:14:13.288 "data_size": 0 00:14:13.288 } 00:14:13.288 ] 00:14:13.288 }' 00:14:13.288 10:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.288 10:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.856 10:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:14.115 [2024-07-26 10:24:26.873191] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.115 [2024-07-26 10:24:26.873233] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d8a630 name Existed_Raid, state configuring 00:14:14.115 10:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:14.375 [2024-07-26 10:24:27.101821] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.375 [2024-07-26 10:24:27.103156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.375 [2024-07-26 10:24:27.103189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.375 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.634 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.634 "name": "Existed_Raid", 00:14:14.634 "uuid": "5e71b672-a19f-47b0-ab14-6ce704ef5a4c", 00:14:14.634 "strip_size_kb": 64, 00:14:14.634 "state": "configuring", 00:14:14.634 "raid_level": "concat", 00:14:14.634 "superblock": true, 00:14:14.634 "num_base_bdevs": 2, 00:14:14.634 "num_base_bdevs_discovered": 1, 00:14:14.634 "num_base_bdevs_operational": 2, 00:14:14.634 "base_bdevs_list": [ 00:14:14.634 { 00:14:14.634 "name": "BaseBdev1", 00:14:14.635 "uuid": "6908df2d-50d9-45ad-befc-6e842b86a147", 00:14:14.635 "is_configured": true, 00:14:14.635 "data_offset": 2048, 00:14:14.635 "data_size": 63488 00:14:14.635 }, 00:14:14.635 { 00:14:14.635 "name": "BaseBdev2", 00:14:14.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.635 "is_configured": false, 00:14:14.635 "data_offset": 0, 00:14:14.635 "data_size": 0 00:14:14.635 } 00:14:14.635 ] 00:14:14.635 }' 00:14:14.635 10:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.635 10:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.203 10:24:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.461 [2024-07-26 10:24:28.135636] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.461 [2024-07-26 10:24:28.135774] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f3d190 00:14:15.461 [2024-07-26 10:24:28.135786] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:15.461 [2024-07-26 10:24:28.135946] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f3f190 00:14:15.461 [2024-07-26 10:24:28.136047] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f3d190 00:14:15.461 [2024-07-26 10:24:28.136056] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f3d190 00:14:15.461 [2024-07-26 10:24:28.136150] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.461 BaseBdev2 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:15.461 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.720 [ 00:14:15.720 { 00:14:15.720 "name": "BaseBdev2", 00:14:15.720 "aliases": [ 00:14:15.720 "59b5fe8c-12c7-4ffc-adbc-d021c713207e" 00:14:15.720 ], 00:14:15.720 "product_name": "Malloc disk", 00:14:15.720 "block_size": 512, 00:14:15.720 "num_blocks": 65536, 00:14:15.720 "uuid": "59b5fe8c-12c7-4ffc-adbc-d021c713207e", 00:14:15.720 "assigned_rate_limits": { 00:14:15.720 "rw_ios_per_sec": 0, 00:14:15.720 "rw_mbytes_per_sec": 0, 00:14:15.720 "r_mbytes_per_sec": 0, 00:14:15.720 "w_mbytes_per_sec": 0 00:14:15.720 }, 00:14:15.720 "claimed": true, 00:14:15.720 "claim_type": "exclusive_write", 00:14:15.720 "zoned": false, 00:14:15.720 "supported_io_types": { 00:14:15.720 "read": true, 00:14:15.720 "write": true, 00:14:15.720 "unmap": true, 00:14:15.720 "flush": true, 00:14:15.720 "reset": true, 00:14:15.720 "nvme_admin": false, 00:14:15.720 "nvme_io": false, 00:14:15.720 "nvme_io_md": false, 00:14:15.720 "write_zeroes": true, 00:14:15.720 "zcopy": true, 00:14:15.720 "get_zone_info": false, 00:14:15.720 "zone_management": false, 00:14:15.720 "zone_append": false, 00:14:15.720 "compare": false, 00:14:15.720 "compare_and_write": false, 00:14:15.720 "abort": true, 00:14:15.720 "seek_hole": false, 00:14:15.720 "seek_data": false, 00:14:15.720 "copy": true, 00:14:15.720 "nvme_iov_md": false 00:14:15.720 }, 
00:14:15.720 "memory_domains": [ 00:14:15.720 { 00:14:15.720 "dma_device_id": "system", 00:14:15.720 "dma_device_type": 1 00:14:15.720 }, 00:14:15.720 { 00:14:15.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.720 "dma_device_type": 2 00:14:15.720 } 00:14:15.720 ], 00:14:15.720 "driver_specific": {} 00:14:15.720 } 00:14:15.720 ] 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.720 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.979 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.979 "name": "Existed_Raid", 00:14:15.979 "uuid": "5e71b672-a19f-47b0-ab14-6ce704ef5a4c", 00:14:15.979 "strip_size_kb": 64, 00:14:15.979 "state": "online", 00:14:15.979 "raid_level": "concat", 00:14:15.979 "superblock": true, 00:14:15.979 "num_base_bdevs": 2, 00:14:15.979 "num_base_bdevs_discovered": 2, 00:14:15.979 "num_base_bdevs_operational": 2, 00:14:15.979 "base_bdevs_list": [ 00:14:15.979 { 00:14:15.979 "name": "BaseBdev1", 00:14:15.979 "uuid": "6908df2d-50d9-45ad-befc-6e842b86a147", 00:14:15.979 "is_configured": true, 00:14:15.979 "data_offset": 2048, 00:14:15.979 "data_size": 63488 00:14:15.979 }, 00:14:15.979 { 00:14:15.979 "name": "BaseBdev2", 00:14:15.979 "uuid": "59b5fe8c-12c7-4ffc-adbc-d021c713207e", 00:14:15.979 "is_configured": true, 00:14:15.979 "data_offset": 2048, 00:14:15.979 "data_size": 63488 00:14:15.979 } 00:14:15.979 ] 00:14:15.979 }' 00:14:15.979 10:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.979 10:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.546 10:24:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:16.546 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:16.805 [2024-07-26 10:24:29.607791] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.805 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:16.805 "name": "Existed_Raid", 00:14:16.805 "aliases": [ 00:14:16.805 "5e71b672-a19f-47b0-ab14-6ce704ef5a4c" 00:14:16.805 ], 00:14:16.805 "product_name": "Raid Volume", 00:14:16.805 "block_size": 512, 00:14:16.805 "num_blocks": 126976, 00:14:16.805 "uuid": "5e71b672-a19f-47b0-ab14-6ce704ef5a4c", 00:14:16.805 "assigned_rate_limits": { 00:14:16.805 "rw_ios_per_sec": 0, 00:14:16.805 "rw_mbytes_per_sec": 0, 00:14:16.805 "r_mbytes_per_sec": 0, 00:14:16.805 "w_mbytes_per_sec": 0 00:14:16.805 }, 00:14:16.805 "claimed": false, 00:14:16.805 "zoned": false, 00:14:16.805 "supported_io_types": { 00:14:16.805 "read": true, 00:14:16.805 "write": true, 00:14:16.805 "unmap": true, 00:14:16.805 "flush": true, 00:14:16.805 "reset": true, 00:14:16.805 "nvme_admin": false, 00:14:16.805 "nvme_io": false, 00:14:16.805 "nvme_io_md": false, 00:14:16.805 "write_zeroes": true, 00:14:16.805 "zcopy": false, 00:14:16.805 "get_zone_info": false, 00:14:16.805 "zone_management": false, 00:14:16.805 "zone_append": false, 00:14:16.805 "compare": false, 00:14:16.805 "compare_and_write": false, 00:14:16.805 "abort": false, 00:14:16.805 "seek_hole": false, 00:14:16.805 "seek_data": false, 00:14:16.805 "copy": false, 00:14:16.805 "nvme_iov_md": false 00:14:16.805 }, 00:14:16.805 "memory_domains": [ 00:14:16.805 { 00:14:16.805 "dma_device_id": "system", 00:14:16.805 "dma_device_type": 1 00:14:16.805 }, 00:14:16.805 { 00:14:16.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.805 "dma_device_type": 2 00:14:16.805 }, 00:14:16.805 { 00:14:16.805 "dma_device_id": "system", 00:14:16.805 "dma_device_type": 1 00:14:16.805 }, 00:14:16.805 { 00:14:16.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.805 "dma_device_type": 2 00:14:16.805 } 00:14:16.805 ], 00:14:16.805 "driver_specific": { 00:14:16.805 "raid": { 00:14:16.805 "uuid": "5e71b672-a19f-47b0-ab14-6ce704ef5a4c", 00:14:16.805 "strip_size_kb": 64, 00:14:16.805 "state": "online", 00:14:16.805 "raid_level": "concat", 00:14:16.805 "superblock": true, 00:14:16.805 "num_base_bdevs": 2, 00:14:16.805 "num_base_bdevs_discovered": 2, 00:14:16.805 "num_base_bdevs_operational": 2, 00:14:16.805 "base_bdevs_list": [ 00:14:16.805 { 00:14:16.805 "name": "BaseBdev1", 00:14:16.805 "uuid": "6908df2d-50d9-45ad-befc-6e842b86a147", 00:14:16.805 "is_configured": true, 00:14:16.805 "data_offset": 2048, 00:14:16.805 "data_size": 63488 00:14:16.805 }, 00:14:16.805 { 00:14:16.805 "name": "BaseBdev2", 00:14:16.805 "uuid": 
"59b5fe8c-12c7-4ffc-adbc-d021c713207e", 00:14:16.805 "is_configured": true, 00:14:16.805 "data_offset": 2048, 00:14:16.805 "data_size": 63488 00:14:16.805 } 00:14:16.805 ] 00:14:16.805 } 00:14:16.805 } 00:14:16.805 }' 00:14:16.805 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.805 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:16.805 BaseBdev2' 00:14:16.805 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.805 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:16.805 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.064 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.064 "name": "BaseBdev1", 00:14:17.064 "aliases": [ 00:14:17.064 "6908df2d-50d9-45ad-befc-6e842b86a147" 00:14:17.064 ], 00:14:17.064 "product_name": "Malloc disk", 00:14:17.064 "block_size": 512, 00:14:17.064 "num_blocks": 65536, 00:14:17.064 "uuid": "6908df2d-50d9-45ad-befc-6e842b86a147", 00:14:17.064 "assigned_rate_limits": { 00:14:17.064 "rw_ios_per_sec": 0, 00:14:17.064 "rw_mbytes_per_sec": 0, 00:14:17.064 "r_mbytes_per_sec": 0, 00:14:17.064 "w_mbytes_per_sec": 0 00:14:17.064 }, 00:14:17.064 "claimed": true, 00:14:17.064 "claim_type": "exclusive_write", 00:14:17.064 "zoned": false, 00:14:17.064 "supported_io_types": { 00:14:17.064 "read": true, 00:14:17.064 "write": true, 00:14:17.064 "unmap": true, 00:14:17.064 "flush": true, 00:14:17.064 "reset": true, 00:14:17.064 "nvme_admin": false, 00:14:17.064 "nvme_io": false, 00:14:17.064 "nvme_io_md": false, 00:14:17.064 "write_zeroes": true, 00:14:17.064 "zcopy": true, 00:14:17.064 "get_zone_info": false, 00:14:17.064 "zone_management": false, 00:14:17.064 "zone_append": false, 00:14:17.064 "compare": false, 00:14:17.064 "compare_and_write": false, 00:14:17.064 "abort": true, 00:14:17.064 "seek_hole": false, 00:14:17.064 "seek_data": false, 00:14:17.064 "copy": true, 00:14:17.064 "nvme_iov_md": false 00:14:17.064 }, 00:14:17.064 "memory_domains": [ 00:14:17.064 { 00:14:17.064 "dma_device_id": "system", 00:14:17.064 "dma_device_type": 1 00:14:17.064 }, 00:14:17.064 { 00:14:17.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.064 "dma_device_type": 2 00:14:17.064 } 00:14:17.064 ], 00:14:17.064 "driver_specific": {} 00:14:17.064 }' 00:14:17.064 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.064 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.322 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.322 10:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.322 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.322 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.322 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.322 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.322 10:24:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.322 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.322 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.581 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.581 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.581 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:17.581 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.581 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.581 "name": "BaseBdev2", 00:14:17.581 "aliases": [ 00:14:17.581 "59b5fe8c-12c7-4ffc-adbc-d021c713207e" 00:14:17.581 ], 00:14:17.581 "product_name": "Malloc disk", 00:14:17.581 "block_size": 512, 00:14:17.581 "num_blocks": 65536, 00:14:17.581 "uuid": "59b5fe8c-12c7-4ffc-adbc-d021c713207e", 00:14:17.581 "assigned_rate_limits": { 00:14:17.581 "rw_ios_per_sec": 0, 00:14:17.581 "rw_mbytes_per_sec": 0, 00:14:17.581 "r_mbytes_per_sec": 0, 00:14:17.581 "w_mbytes_per_sec": 0 00:14:17.581 }, 00:14:17.581 "claimed": true, 00:14:17.581 "claim_type": "exclusive_write", 00:14:17.581 "zoned": false, 00:14:17.581 "supported_io_types": { 00:14:17.581 "read": true, 00:14:17.581 "write": true, 00:14:17.581 "unmap": true, 00:14:17.581 "flush": true, 00:14:17.581 "reset": true, 00:14:17.581 "nvme_admin": false, 00:14:17.581 "nvme_io": false, 00:14:17.581 "nvme_io_md": false, 00:14:17.581 "write_zeroes": true, 00:14:17.581 "zcopy": true, 00:14:17.581 "get_zone_info": false, 00:14:17.581 "zone_management": false, 00:14:17.581 "zone_append": false, 00:14:17.581 "compare": false, 00:14:17.581 "compare_and_write": false, 00:14:17.581 "abort": true, 00:14:17.581 "seek_hole": false, 00:14:17.581 "seek_data": false, 00:14:17.581 "copy": true, 00:14:17.581 "nvme_iov_md": false 00:14:17.581 }, 00:14:17.581 "memory_domains": [ 00:14:17.581 { 00:14:17.581 "dma_device_id": "system", 00:14:17.581 "dma_device_type": 1 00:14:17.581 }, 00:14:17.581 { 00:14:17.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.581 "dma_device_type": 2 00:14:17.581 } 00:14:17.581 ], 00:14:17.581 "driver_specific": {} 00:14:17.581 }' 00:14:17.581 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.839 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.839 10:24:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.098 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.098 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:18.098 10:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.098 [2024-07-26 10:24:30.999250] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.098 [2024-07-26 10:24:30.999278] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.098 [2024-07-26 10:24:30.999315] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.356 "name": "Existed_Raid", 00:14:18.356 "uuid": "5e71b672-a19f-47b0-ab14-6ce704ef5a4c", 00:14:18.356 "strip_size_kb": 64, 00:14:18.356 "state": "offline", 00:14:18.356 "raid_level": "concat", 00:14:18.356 "superblock": true, 00:14:18.356 "num_base_bdevs": 2, 00:14:18.356 "num_base_bdevs_discovered": 1, 00:14:18.356 "num_base_bdevs_operational": 1, 00:14:18.356 "base_bdevs_list": [ 00:14:18.356 { 00:14:18.356 "name": null, 00:14:18.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.356 "is_configured": 
false, 00:14:18.356 "data_offset": 2048, 00:14:18.356 "data_size": 63488 00:14:18.356 }, 00:14:18.356 { 00:14:18.356 "name": "BaseBdev2", 00:14:18.356 "uuid": "59b5fe8c-12c7-4ffc-adbc-d021c713207e", 00:14:18.356 "is_configured": true, 00:14:18.356 "data_offset": 2048, 00:14:18.356 "data_size": 63488 00:14:18.356 } 00:14:18.356 ] 00:14:18.356 }' 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.356 10:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.923 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:18.923 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:18.923 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:18.923 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.181 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:19.181 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.181 10:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:19.440 [2024-07-26 10:24:32.187378] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.440 [2024-07-26 10:24:32.187422] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f3d190 name Existed_Raid, state offline 00:14:19.440 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:19.440 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:19.440 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.440 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3354687 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3354687 ']' 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3354687 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3354687 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3354687' 00:14:19.699 killing process with pid 3354687 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3354687 00:14:19.699 [2024-07-26 10:24:32.505376] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.699 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3354687 00:14:19.699 [2024-07-26 10:24:32.506226] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.957 10:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:19.957 00:14:19.957 real 0m9.948s 00:14:19.957 user 0m17.674s 00:14:19.957 sys 0m1.869s 00:14:19.957 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.957 10:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.957 ************************************ 00:14:19.957 END TEST raid_state_function_test_sb 00:14:19.957 ************************************ 00:14:19.957 10:24:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:19.957 10:24:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:19.958 10:24:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.958 10:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.958 ************************************ 00:14:19.958 START TEST raid_superblock_test 00:14:19.958 ************************************ 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
strip_size_create_arg='-z 64' 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3356603 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3356603 /var/tmp/spdk-raid.sock 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3356603 ']' 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.958 10:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.958 [2024-07-26 10:24:32.825203] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:14:19.958 [2024-07-26 10:24:32.825265] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3356603 ] 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.4 cannot be 
used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:20.217 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.217 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:20.217 [2024-07-26 10:24:32.959134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.217 [2024-07-26 10:24:33.003790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.217 [2024-07-26 10:24:33.059011] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.217 [2024-07-26 10:24:33.059039] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:21.152 10:24:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:21.152 malloc1 00:14:21.152 10:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.410 [2024-07-26 10:24:34.160904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.410 [2024-07-26 10:24:34.160947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.410 [2024-07-26 10:24:34.160964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xddd270 00:14:21.410 [2024-07-26 10:24:34.160976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.410 [2024-07-26 10:24:34.162333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.410 [2024-07-26 10:24:34.162362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.410 pt1 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:21.410 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:21.669 malloc2 00:14:21.669 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.928 [2024-07-26 10:24:34.622375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.928 [2024-07-26 10:24:34.622423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.928 [2024-07-26 10:24:34.622439] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd992f0 00:14:21.928 [2024-07-26 10:24:34.622450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.928 [2024-07-26 10:24:34.623902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.928 [2024-07-26 10:24:34.623930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.928 pt2 00:14:21.928 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:21.928 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:21.928 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:22.188 [2024-07-26 10:24:34.851006] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.188 [2024-07-26 10:24:34.852320] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.188 [2024-07-26 10:24:34.852437] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd62f20 00:14:22.188 [2024-07-26 10:24:34.852453] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:22.188 [2024-07-26 10:24:34.852650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc42200 00:14:22.188 [2024-07-26 10:24:34.852770] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd62f20 00:14:22.188 [2024-07-26 10:24:34.852779] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd62f20 00:14:22.188 [2024-07-26 10:24:34.852882] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.188 10:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.447 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.447 "name": "raid_bdev1", 00:14:22.447 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:22.447 "strip_size_kb": 64, 
00:14:22.447 "state": "online", 00:14:22.447 "raid_level": "concat", 00:14:22.447 "superblock": true, 00:14:22.447 "num_base_bdevs": 2, 00:14:22.447 "num_base_bdevs_discovered": 2, 00:14:22.447 "num_base_bdevs_operational": 2, 00:14:22.447 "base_bdevs_list": [ 00:14:22.447 { 00:14:22.447 "name": "pt1", 00:14:22.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.447 "is_configured": true, 00:14:22.447 "data_offset": 2048, 00:14:22.447 "data_size": 63488 00:14:22.447 }, 00:14:22.447 { 00:14:22.447 "name": "pt2", 00:14:22.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.447 "is_configured": true, 00:14:22.447 "data_offset": 2048, 00:14:22.447 "data_size": 63488 00:14:22.447 } 00:14:22.447 ] 00:14:22.447 }' 00:14:22.447 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.447 10:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.016 [2024-07-26 10:24:35.877909] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.016 "name": "raid_bdev1", 00:14:23.016 "aliases": [ 00:14:23.016 "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee" 00:14:23.016 ], 00:14:23.016 "product_name": "Raid Volume", 00:14:23.016 "block_size": 512, 00:14:23.016 "num_blocks": 126976, 00:14:23.016 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:23.016 "assigned_rate_limits": { 00:14:23.016 "rw_ios_per_sec": 0, 00:14:23.016 "rw_mbytes_per_sec": 0, 00:14:23.016 "r_mbytes_per_sec": 0, 00:14:23.016 "w_mbytes_per_sec": 0 00:14:23.016 }, 00:14:23.016 "claimed": false, 00:14:23.016 "zoned": false, 00:14:23.016 "supported_io_types": { 00:14:23.016 "read": true, 00:14:23.016 "write": true, 00:14:23.016 "unmap": true, 00:14:23.016 "flush": true, 00:14:23.016 "reset": true, 00:14:23.016 "nvme_admin": false, 00:14:23.016 "nvme_io": false, 00:14:23.016 "nvme_io_md": false, 00:14:23.016 "write_zeroes": true, 00:14:23.016 "zcopy": false, 00:14:23.016 "get_zone_info": false, 00:14:23.016 "zone_management": false, 00:14:23.016 "zone_append": false, 00:14:23.016 "compare": false, 00:14:23.016 "compare_and_write": false, 00:14:23.016 "abort": false, 00:14:23.016 "seek_hole": false, 00:14:23.016 "seek_data": false, 00:14:23.016 "copy": false, 00:14:23.016 "nvme_iov_md": false 00:14:23.016 }, 00:14:23.016 "memory_domains": [ 00:14:23.016 { 00:14:23.016 "dma_device_id": "system", 00:14:23.016 "dma_device_type": 1 00:14:23.016 }, 00:14:23.016 { 00:14:23.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:23.016 "dma_device_type": 2 00:14:23.016 }, 00:14:23.016 { 00:14:23.016 "dma_device_id": "system", 00:14:23.016 "dma_device_type": 1 00:14:23.016 }, 00:14:23.016 { 00:14:23.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.016 "dma_device_type": 2 00:14:23.016 } 00:14:23.016 ], 00:14:23.016 "driver_specific": { 00:14:23.016 "raid": { 00:14:23.016 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:23.016 "strip_size_kb": 64, 00:14:23.016 "state": "online", 00:14:23.016 "raid_level": "concat", 00:14:23.016 "superblock": true, 00:14:23.016 "num_base_bdevs": 2, 00:14:23.016 "num_base_bdevs_discovered": 2, 00:14:23.016 "num_base_bdevs_operational": 2, 00:14:23.016 "base_bdevs_list": [ 00:14:23.016 { 00:14:23.016 "name": "pt1", 00:14:23.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.016 "is_configured": true, 00:14:23.016 "data_offset": 2048, 00:14:23.016 "data_size": 63488 00:14:23.016 }, 00:14:23.016 { 00:14:23.016 "name": "pt2", 00:14:23.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.016 "is_configured": true, 00:14:23.016 "data_offset": 2048, 00:14:23.016 "data_size": 63488 00:14:23.016 } 00:14:23.016 ] 00:14:23.016 } 00:14:23.016 } 00:14:23.016 }' 00:14:23.016 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.276 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:23.276 pt2' 00:14:23.276 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.276 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:23.276 10:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.276 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.276 "name": "pt1", 00:14:23.276 "aliases": [ 00:14:23.276 "00000000-0000-0000-0000-000000000001" 00:14:23.276 ], 00:14:23.276 "product_name": "passthru", 00:14:23.276 "block_size": 512, 00:14:23.276 "num_blocks": 65536, 00:14:23.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.276 "assigned_rate_limits": { 00:14:23.276 "rw_ios_per_sec": 0, 00:14:23.276 "rw_mbytes_per_sec": 0, 00:14:23.276 "r_mbytes_per_sec": 0, 00:14:23.276 "w_mbytes_per_sec": 0 00:14:23.276 }, 00:14:23.276 "claimed": true, 00:14:23.276 "claim_type": "exclusive_write", 00:14:23.276 "zoned": false, 00:14:23.276 "supported_io_types": { 00:14:23.276 "read": true, 00:14:23.276 "write": true, 00:14:23.276 "unmap": true, 00:14:23.276 "flush": true, 00:14:23.276 "reset": true, 00:14:23.276 "nvme_admin": false, 00:14:23.276 "nvme_io": false, 00:14:23.276 "nvme_io_md": false, 00:14:23.276 "write_zeroes": true, 00:14:23.276 "zcopy": true, 00:14:23.276 "get_zone_info": false, 00:14:23.276 "zone_management": false, 00:14:23.276 "zone_append": false, 00:14:23.276 "compare": false, 00:14:23.276 "compare_and_write": false, 00:14:23.276 "abort": true, 00:14:23.276 "seek_hole": false, 00:14:23.276 "seek_data": false, 00:14:23.276 "copy": true, 00:14:23.276 "nvme_iov_md": false 00:14:23.276 }, 00:14:23.276 "memory_domains": [ 00:14:23.276 { 00:14:23.276 "dma_device_id": "system", 00:14:23.276 "dma_device_type": 1 00:14:23.276 }, 00:14:23.276 { 00:14:23.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.276 "dma_device_type": 2 00:14:23.276 } 00:14:23.276 ], 
00:14:23.276 "driver_specific": { 00:14:23.276 "passthru": { 00:14:23.276 "name": "pt1", 00:14:23.276 "base_bdev_name": "malloc1" 00:14:23.276 } 00:14:23.276 } 00:14:23.276 }' 00:14:23.276 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.536 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.795 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.795 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.795 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.795 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:23.795 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.055 "name": "pt2", 00:14:24.055 "aliases": [ 00:14:24.055 "00000000-0000-0000-0000-000000000002" 00:14:24.055 ], 00:14:24.055 "product_name": "passthru", 00:14:24.055 "block_size": 512, 00:14:24.055 "num_blocks": 65536, 00:14:24.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.055 "assigned_rate_limits": { 00:14:24.055 "rw_ios_per_sec": 0, 00:14:24.055 "rw_mbytes_per_sec": 0, 00:14:24.055 "r_mbytes_per_sec": 0, 00:14:24.055 "w_mbytes_per_sec": 0 00:14:24.055 }, 00:14:24.055 "claimed": true, 00:14:24.055 "claim_type": "exclusive_write", 00:14:24.055 "zoned": false, 00:14:24.055 "supported_io_types": { 00:14:24.055 "read": true, 00:14:24.055 "write": true, 00:14:24.055 "unmap": true, 00:14:24.055 "flush": true, 00:14:24.055 "reset": true, 00:14:24.055 "nvme_admin": false, 00:14:24.055 "nvme_io": false, 00:14:24.055 "nvme_io_md": false, 00:14:24.055 "write_zeroes": true, 00:14:24.055 "zcopy": true, 00:14:24.055 "get_zone_info": false, 00:14:24.055 "zone_management": false, 00:14:24.055 "zone_append": false, 00:14:24.055 "compare": false, 00:14:24.055 "compare_and_write": false, 00:14:24.055 "abort": true, 00:14:24.055 "seek_hole": false, 00:14:24.055 "seek_data": false, 00:14:24.055 "copy": true, 00:14:24.055 "nvme_iov_md": false 00:14:24.055 }, 00:14:24.055 "memory_domains": [ 00:14:24.055 { 00:14:24.055 "dma_device_id": "system", 00:14:24.055 "dma_device_type": 1 00:14:24.055 }, 00:14:24.055 { 00:14:24.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.055 "dma_device_type": 2 00:14:24.055 } 00:14:24.055 ], 00:14:24.055 "driver_specific": { 00:14:24.055 "passthru": { 00:14:24.055 "name": "pt2", 00:14:24.055 "base_bdev_name": "malloc2" 
00:14:24.055 } 00:14:24.055 } 00:14:24.055 }' 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.055 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.314 10:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.314 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.314 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.314 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.314 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.314 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:24.314 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:24.595 [2024-07-26 10:24:37.301619] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.595 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=19f0ecdd-0358-4a0b-9a05-321e2b0a15ee 00:14:24.595 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 19f0ecdd-0358-4a0b-9a05-321e2b0a15ee ']' 00:14:24.595 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.178 [2024-07-26 10:24:37.802717] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.178 [2024-07-26 10:24:37.802739] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.178 [2024-07-26 10:24:37.802796] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.178 [2024-07-26 10:24:37.802835] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.178 [2024-07-26 10:24:37.802845] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd62f20 name raid_bdev1, state offline 00:14:25.178 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.178 10:24:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:25.178 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:25.178 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:25.178 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.178 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:25.747 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:25.747 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:26.006 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:26.006 10:24:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.266 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:26.525 [2024-07-26 10:24:39.238435] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:26.525 [2024-07-26 10:24:39.239703] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:26.525 [2024-07-26 10:24:39.239759] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:26.525 [2024-07-26 10:24:39.239798] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:26.525 [2024-07-26 10:24:39.239815] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.525 [2024-07-26 
10:24:39.239823] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd98a60 name raid_bdev1, state configuring 00:14:26.525 request: 00:14:26.525 { 00:14:26.525 "name": "raid_bdev1", 00:14:26.525 "raid_level": "concat", 00:14:26.525 "base_bdevs": [ 00:14:26.525 "malloc1", 00:14:26.525 "malloc2" 00:14:26.526 ], 00:14:26.526 "strip_size_kb": 64, 00:14:26.526 "superblock": false, 00:14:26.526 "method": "bdev_raid_create", 00:14:26.526 "req_id": 1 00:14:26.526 } 00:14:26.526 Got JSON-RPC error response 00:14:26.526 response: 00:14:26.526 { 00:14:26.526 "code": -17, 00:14:26.526 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:26.526 } 00:14:26.526 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:26.526 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.526 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.526 10:24:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.526 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.526 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:14:26.785 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:26.785 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:26.785 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.354 [2024-07-26 10:24:39.964284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.354 [2024-07-26 10:24:39.964330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.354 [2024-07-26 10:24:39.964347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc2cd90 00:14:27.354 [2024-07-26 10:24:39.964359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.354 [2024-07-26 10:24:39.965816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.354 [2024-07-26 10:24:39.965846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.354 [2024-07-26 10:24:39.965913] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:27.354 [2024-07-26 10:24:39.965936] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.354 pt1 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.354 10:24:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.354 10:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.354 "name": "raid_bdev1", 00:14:27.354 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:27.354 "strip_size_kb": 64, 00:14:27.354 "state": "configuring", 00:14:27.354 "raid_level": "concat", 00:14:27.354 "superblock": true, 00:14:27.354 "num_base_bdevs": 2, 00:14:27.354 "num_base_bdevs_discovered": 1, 00:14:27.354 "num_base_bdevs_operational": 2, 00:14:27.354 "base_bdevs_list": [ 00:14:27.354 { 00:14:27.354 "name": "pt1", 00:14:27.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.354 "is_configured": true, 00:14:27.354 "data_offset": 2048, 00:14:27.354 "data_size": 63488 00:14:27.354 }, 00:14:27.354 { 00:14:27.354 "name": null, 00:14:27.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.354 "is_configured": false, 00:14:27.354 "data_offset": 2048, 00:14:27.354 "data_size": 63488 00:14:27.354 } 00:14:27.354 ] 00:14:27.354 }' 00:14:27.354 10:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.354 10:24:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.923 10:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:14:27.923 10:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:27.923 10:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:27.923 10:24:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.493 [2024-07-26 10:24:41.263708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.493 [2024-07-26 10:24:41.263756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.493 [2024-07-26 10:24:41.263773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc2c100 00:14:28.493 [2024-07-26 10:24:41.263785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.493 [2024-07-26 10:24:41.264097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.493 [2024-07-26 10:24:41.264114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.493 [2024-07-26 10:24:41.264178] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.493 [2024-07-26 10:24:41.264196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.493 [2024-07-26 10:24:41.264286] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd66510 00:14:28.493 [2024-07-26 10:24:41.264296] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.493 [2024-07-26 10:24:41.264449] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc427f0 00:14:28.493 [2024-07-26 10:24:41.264558] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd66510 00:14:28.493 [2024-07-26 10:24:41.264567] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd66510 00:14:28.493 [2024-07-26 10:24:41.264651] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.493 pt2 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.493 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.753 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.753 "name": "raid_bdev1", 00:14:28.753 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:28.753 "strip_size_kb": 64, 00:14:28.753 "state": "online", 00:14:28.753 "raid_level": "concat", 00:14:28.753 "superblock": true, 00:14:28.753 "num_base_bdevs": 2, 00:14:28.753 "num_base_bdevs_discovered": 2, 00:14:28.753 "num_base_bdevs_operational": 2, 00:14:28.753 "base_bdevs_list": [ 00:14:28.753 { 00:14:28.753 "name": "pt1", 00:14:28.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.753 "is_configured": true, 00:14:28.753 "data_offset": 2048, 00:14:28.753 "data_size": 63488 00:14:28.753 }, 00:14:28.753 { 00:14:28.753 "name": "pt2", 00:14:28.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.753 "is_configured": true, 00:14:28.753 "data_offset": 2048, 00:14:28.753 "data_size": 63488 00:14:28.753 } 00:14:28.753 ] 00:14:28.753 }' 00:14:28.753 10:24:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.753 10:24:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.322 10:24:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.322 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:29.581 [2024-07-26 10:24:42.302895] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.581 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:29.581 "name": "raid_bdev1", 00:14:29.581 "aliases": [ 00:14:29.581 "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee" 00:14:29.581 ], 00:14:29.581 "product_name": "Raid Volume", 00:14:29.581 "block_size": 512, 00:14:29.581 "num_blocks": 126976, 00:14:29.581 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:29.581 "assigned_rate_limits": { 00:14:29.581 "rw_ios_per_sec": 0, 00:14:29.581 "rw_mbytes_per_sec": 0, 00:14:29.581 "r_mbytes_per_sec": 0, 00:14:29.581 "w_mbytes_per_sec": 0 00:14:29.581 }, 00:14:29.581 "claimed": false, 00:14:29.581 "zoned": false, 00:14:29.581 "supported_io_types": { 00:14:29.581 "read": true, 00:14:29.581 "write": true, 00:14:29.581 "unmap": true, 00:14:29.581 "flush": true, 00:14:29.581 "reset": true, 00:14:29.581 "nvme_admin": false, 00:14:29.581 "nvme_io": false, 00:14:29.581 "nvme_io_md": false, 00:14:29.581 "write_zeroes": true, 00:14:29.581 "zcopy": false, 00:14:29.581 "get_zone_info": false, 00:14:29.581 "zone_management": false, 00:14:29.581 "zone_append": false, 00:14:29.581 "compare": false, 00:14:29.581 "compare_and_write": false, 00:14:29.581 "abort": false, 00:14:29.581 "seek_hole": false, 00:14:29.581 "seek_data": false, 00:14:29.581 "copy": false, 00:14:29.581 "nvme_iov_md": false 00:14:29.581 }, 00:14:29.581 "memory_domains": [ 00:14:29.581 { 00:14:29.581 "dma_device_id": "system", 00:14:29.581 "dma_device_type": 1 00:14:29.581 }, 00:14:29.581 { 00:14:29.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.581 "dma_device_type": 2 00:14:29.581 }, 00:14:29.581 { 00:14:29.581 "dma_device_id": "system", 00:14:29.581 "dma_device_type": 1 00:14:29.581 }, 00:14:29.581 { 00:14:29.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.581 "dma_device_type": 2 00:14:29.581 } 00:14:29.581 ], 00:14:29.581 "driver_specific": { 00:14:29.581 "raid": { 00:14:29.581 "uuid": "19f0ecdd-0358-4a0b-9a05-321e2b0a15ee", 00:14:29.581 "strip_size_kb": 64, 00:14:29.582 "state": "online", 00:14:29.582 "raid_level": "concat", 00:14:29.582 "superblock": true, 00:14:29.582 "num_base_bdevs": 2, 00:14:29.582 "num_base_bdevs_discovered": 2, 00:14:29.582 "num_base_bdevs_operational": 2, 00:14:29.582 "base_bdevs_list": [ 00:14:29.582 { 00:14:29.582 "name": "pt1", 00:14:29.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.582 "is_configured": true, 00:14:29.582 "data_offset": 2048, 00:14:29.582 "data_size": 63488 00:14:29.582 }, 00:14:29.582 { 00:14:29.582 "name": "pt2", 00:14:29.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.582 "is_configured": true, 00:14:29.582 
"data_offset": 2048, 00:14:29.582 "data_size": 63488 00:14:29.582 } 00:14:29.582 ] 00:14:29.582 } 00:14:29.582 } 00:14:29.582 }' 00:14:29.582 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.582 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:29.582 pt2' 00:14:29.582 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.582 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:29.582 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.841 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.841 "name": "pt1", 00:14:29.841 "aliases": [ 00:14:29.841 "00000000-0000-0000-0000-000000000001" 00:14:29.841 ], 00:14:29.841 "product_name": "passthru", 00:14:29.841 "block_size": 512, 00:14:29.841 "num_blocks": 65536, 00:14:29.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.841 "assigned_rate_limits": { 00:14:29.841 "rw_ios_per_sec": 0, 00:14:29.841 "rw_mbytes_per_sec": 0, 00:14:29.841 "r_mbytes_per_sec": 0, 00:14:29.841 "w_mbytes_per_sec": 0 00:14:29.841 }, 00:14:29.841 "claimed": true, 00:14:29.841 "claim_type": "exclusive_write", 00:14:29.841 "zoned": false, 00:14:29.841 "supported_io_types": { 00:14:29.841 "read": true, 00:14:29.841 "write": true, 00:14:29.841 "unmap": true, 00:14:29.841 "flush": true, 00:14:29.841 "reset": true, 00:14:29.841 "nvme_admin": false, 00:14:29.841 "nvme_io": false, 00:14:29.841 "nvme_io_md": false, 00:14:29.841 "write_zeroes": true, 00:14:29.841 "zcopy": true, 00:14:29.841 "get_zone_info": false, 00:14:29.841 "zone_management": false, 00:14:29.841 "zone_append": false, 00:14:29.841 "compare": false, 00:14:29.841 "compare_and_write": false, 00:14:29.841 "abort": true, 00:14:29.841 "seek_hole": false, 00:14:29.841 "seek_data": false, 00:14:29.841 "copy": true, 00:14:29.841 "nvme_iov_md": false 00:14:29.841 }, 00:14:29.841 "memory_domains": [ 00:14:29.841 { 00:14:29.841 "dma_device_id": "system", 00:14:29.841 "dma_device_type": 1 00:14:29.841 }, 00:14:29.841 { 00:14:29.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.841 "dma_device_type": 2 00:14:29.841 } 00:14:29.841 ], 00:14:29.841 "driver_specific": { 00:14:29.841 "passthru": { 00:14:29.841 "name": "pt1", 00:14:29.841 "base_bdev_name": "malloc1" 00:14:29.841 } 00:14:29.841 } 00:14:29.841 }' 00:14:29.841 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.841 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.841 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.841 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.841 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:30.100 10:24:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.358 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.358 "name": "pt2", 00:14:30.358 "aliases": [ 00:14:30.358 "00000000-0000-0000-0000-000000000002" 00:14:30.358 ], 00:14:30.358 "product_name": "passthru", 00:14:30.358 "block_size": 512, 00:14:30.358 "num_blocks": 65536, 00:14:30.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.358 "assigned_rate_limits": { 00:14:30.358 "rw_ios_per_sec": 0, 00:14:30.358 "rw_mbytes_per_sec": 0, 00:14:30.358 "r_mbytes_per_sec": 0, 00:14:30.358 "w_mbytes_per_sec": 0 00:14:30.358 }, 00:14:30.358 "claimed": true, 00:14:30.358 "claim_type": "exclusive_write", 00:14:30.358 "zoned": false, 00:14:30.358 "supported_io_types": { 00:14:30.358 "read": true, 00:14:30.358 "write": true, 00:14:30.358 "unmap": true, 00:14:30.358 "flush": true, 00:14:30.358 "reset": true, 00:14:30.358 "nvme_admin": false, 00:14:30.358 "nvme_io": false, 00:14:30.358 "nvme_io_md": false, 00:14:30.358 "write_zeroes": true, 00:14:30.358 "zcopy": true, 00:14:30.358 "get_zone_info": false, 00:14:30.358 "zone_management": false, 00:14:30.358 "zone_append": false, 00:14:30.358 "compare": false, 00:14:30.358 "compare_and_write": false, 00:14:30.358 "abort": true, 00:14:30.358 "seek_hole": false, 00:14:30.358 "seek_data": false, 00:14:30.358 "copy": true, 00:14:30.358 "nvme_iov_md": false 00:14:30.358 }, 00:14:30.358 "memory_domains": [ 00:14:30.358 { 00:14:30.358 "dma_device_id": "system", 00:14:30.358 "dma_device_type": 1 00:14:30.358 }, 00:14:30.358 { 00:14:30.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.358 "dma_device_type": 2 00:14:30.358 } 00:14:30.358 ], 00:14:30.358 "driver_specific": { 00:14:30.358 "passthru": { 00:14:30.358 "name": "pt2", 00:14:30.358 "base_bdev_name": "malloc2" 00:14:30.358 } 00:14:30.358 } 00:14:30.358 }' 00:14:30.358 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.358 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.358 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.358 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.616 10:24:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.616 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:30.874 [2024-07-26 10:24:43.674472] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 19f0ecdd-0358-4a0b-9a05-321e2b0a15ee '!=' 19f0ecdd-0358-4a0b-9a05-321e2b0a15ee ']' 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3356603 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3356603 ']' 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3356603 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3356603 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3356603' 00:14:30.874 killing process with pid 3356603 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3356603 00:14:30.874 [2024-07-26 10:24:43.753513] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.874 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3356603 00:14:30.874 [2024-07-26 10:24:43.753566] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.874 [2024-07-26 10:24:43.753608] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.874 [2024-07-26 10:24:43.753619] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd66510 name raid_bdev1, state offline 00:14:30.874 [2024-07-26 10:24:43.769678] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.134 10:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:31.134 00:14:31.134 real 0m11.182s 00:14:31.134 user 0m20.091s 00:14:31.134 sys 0m1.963s 00:14:31.134 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.134 10:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.134 ************************************ 00:14:31.134 END TEST raid_superblock_test 00:14:31.134 ************************************ 00:14:31.134 10:24:43 bdev_raid -- bdev/bdev_raid.sh@950 -- # 
run_test raid_read_error_test raid_io_error_test concat 2 read 00:14:31.134 10:24:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:31.134 10:24:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.134 10:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.134 ************************************ 00:14:31.134 START TEST raid_read_error_test 00:14:31.134 ************************************ 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:31.134 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.NVj0HScuTb 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3358829 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3358829 /var/tmp/spdk-raid.sock 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 
3358829 ']' 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.393 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.393 [2024-07-26 10:24:44.099112] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:14:31.393 [2024-07-26 10:24:44.099184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3358829 ] 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.0 
cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:31.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.393 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:31.393 [2024-07-26 10:24:44.232896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.393 [2024-07-26 10:24:44.276024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.651 [2024-07-26 10:24:44.336826] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.651 [2024-07-26 10:24:44.336862] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.218 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.218 10:24:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:32.218 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:32.218 10:24:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.478 BaseBdev1_malloc 00:14:32.478 10:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:32.736 true 00:14:32.736 10:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:32.995 [2024-07-26 10:24:45.669231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:32.995 [2024-07-26 10:24:45.669274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.995 [2024-07-26 10:24:45.669291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ff7c0 00:14:32.995 [2024-07-26 10:24:45.669302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.995 [2024-07-26 10:24:45.670751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.995 [2024-07-26 10:24:45.670777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.995 BaseBdev1 00:14:32.995 10:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:32.996 10:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:33.254 BaseBdev2_malloc 00:14:33.254 10:24:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:33.254 true 00:14:33.254 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:33.513 [2024-07-26 10:24:46.359385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:33.513 [2024-07-26 10:24:46.359423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.513 [2024-07-26 10:24:46.359441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22a6960 00:14:33.513 [2024-07-26 10:24:46.359452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.513 [2024-07-26 10:24:46.360711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.513 [2024-07-26 10:24:46.360737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.513 BaseBdev2 00:14:33.513 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:33.772 [2024-07-26 10:24:46.588009] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.772 [2024-07-26 10:24:46.589074] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.772 [2024-07-26 10:24:46.589233] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x214d860 00:14:33.772 [2024-07-26 10:24:46.589246] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.772 [2024-07-26 10:24:46.589414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22a6280 00:14:33.772 [2024-07-26 10:24:46.589535] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x214d860 00:14:33.772 [2024-07-26 10:24:46.589544] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x214d860 00:14:33.772 
[2024-07-26 10:24:46.589644] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.772 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.031 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.031 "name": "raid_bdev1", 00:14:34.031 "uuid": "5eb73aaf-e59b-47e2-bcc8-aaf1f551922c", 00:14:34.031 "strip_size_kb": 64, 00:14:34.031 "state": "online", 00:14:34.031 "raid_level": "concat", 00:14:34.031 "superblock": true, 00:14:34.031 "num_base_bdevs": 2, 00:14:34.031 "num_base_bdevs_discovered": 2, 00:14:34.031 "num_base_bdevs_operational": 2, 00:14:34.031 "base_bdevs_list": [ 00:14:34.031 { 00:14:34.031 "name": "BaseBdev1", 00:14:34.031 "uuid": "64d4ffd2-b694-5f4e-a13a-b89a05e14a4a", 00:14:34.031 "is_configured": true, 00:14:34.031 "data_offset": 2048, 00:14:34.031 "data_size": 63488 00:14:34.031 }, 00:14:34.031 { 00:14:34.031 "name": "BaseBdev2", 00:14:34.031 "uuid": "0f7f6a12-a666-5ca6-bafa-edfb408d73a8", 00:14:34.031 "is_configured": true, 00:14:34.031 "data_offset": 2048, 00:14:34.031 "data_size": 63488 00:14:34.031 } 00:14:34.031 ] 00:14:34.031 }' 00:14:34.031 10:24:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.031 10:24:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.598 10:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:34.598 10:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:34.857 [2024-07-26 10:24:47.514691] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22a8d10 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat 
= \r\a\i\d\1 ]] 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.793 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.794 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.053 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.053 "name": "raid_bdev1", 00:14:36.053 "uuid": "5eb73aaf-e59b-47e2-bcc8-aaf1f551922c", 00:14:36.053 "strip_size_kb": 64, 00:14:36.053 "state": "online", 00:14:36.053 "raid_level": "concat", 00:14:36.053 "superblock": true, 00:14:36.053 "num_base_bdevs": 2, 00:14:36.053 "num_base_bdevs_discovered": 2, 00:14:36.053 "num_base_bdevs_operational": 2, 00:14:36.053 "base_bdevs_list": [ 00:14:36.053 { 00:14:36.053 "name": "BaseBdev1", 00:14:36.053 "uuid": "64d4ffd2-b694-5f4e-a13a-b89a05e14a4a", 00:14:36.053 "is_configured": true, 00:14:36.053 "data_offset": 2048, 00:14:36.053 "data_size": 63488 00:14:36.053 }, 00:14:36.053 { 00:14:36.053 "name": "BaseBdev2", 00:14:36.053 "uuid": "0f7f6a12-a666-5ca6-bafa-edfb408d73a8", 00:14:36.053 "is_configured": true, 00:14:36.053 "data_offset": 2048, 00:14:36.053 "data_size": 63488 00:14:36.053 } 00:14:36.053 ] 00:14:36.053 }' 00:14:36.053 10:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.053 10:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.621 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:36.880 [2024-07-26 10:24:49.572221] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.880 [2024-07-26 10:24:49.572250] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.880 [2024-07-26 10:24:49.575165] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.880 [2024-07-26 10:24:49.575194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.880 [2024-07-26 10:24:49.575218] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:14:36.880 [2024-07-26 10:24:49.575228] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x214d860 name raid_bdev1, state offline 00:14:36.880 0 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3358829 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3358829 ']' 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3358829 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3358829 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3358829' 00:14:36.880 killing process with pid 3358829 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3358829 00:14:36.880 [2024-07-26 10:24:49.641733] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.880 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3358829 00:14:36.880 [2024-07-26 10:24:49.651479] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.NVj0HScuTb 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:14:37.140 00:14:37.140 real 0m5.820s 00:14:37.140 user 0m9.010s 00:14:37.140 sys 0m1.016s 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.140 10:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.140 ************************************ 00:14:37.140 END TEST raid_read_error_test 00:14:37.140 ************************************ 00:14:37.140 10:24:49 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:14:37.140 10:24:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:37.140 10:24:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.140 10:24:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:37.140 ************************************ 00:14:37.140 START TEST raid_write_error_test 00:14:37.140 ************************************ 00:14:37.140 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 
00:14:37.140 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:14:37.140 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.YcCMvE5zAj 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3359853 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3359853 /var/tmp/spdk-raid.sock 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3359853 ']' 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:37.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.141 10:24:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.141 [2024-07-26 10:24:50.004074] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:14:37.141 [2024-07-26 10:24:50.004134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3359853 ] 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:37.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.400 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:37.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:37.401 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:37.401 [2024-07-26 10:24:50.140358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.401 [2024-07-26 10:24:50.184519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.401 [2024-07-26 10:24:50.242787] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.401 [2024-07-26 10:24:50.242816] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.337 10:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.337 10:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:38.337 10:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:38.337 10:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:38.337 BaseBdev1_malloc 00:14:38.337 10:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:38.595 true 00:14:38.595 10:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:38.854 [2024-07-26 10:24:51.553206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:38.854 [2024-07-26 10:24:51.553247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.854 [2024-07-26 10:24:51.553264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a057c0 00:14:38.854 [2024-07-26 10:24:51.553275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
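The three RPCs just issued stack an error-injection bdev between the malloc backing device and the passthru bdev that the raid will consume: BaseBdev1 is a passthru on top of EE_BaseBdev1_malloc, which in turn wraps BaseBdev1_malloc, so injecting a write failure into EE_BaseBdev1_malloc later in the run makes writes issued through raid_bdev1 fail underneath the raid layer. At the end of the test the script reads the observed failure rate back out of the bdevperf log created by the earlier mktemp call, roughly:
  grep -v Job /raidtest/tmp.YcCMvE5zAj | grep raid_bdev1 | awk '{print $6}'   # fails per second, 0.44 in this run
and, because concat carries no redundancy, asserts that this rate is not 0.00, i.e. that the injected write errors were visible to the application.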
00:14:38.854 [2024-07-26 10:24:51.554796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.855 [2024-07-26 10:24:51.554824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.855 BaseBdev1 00:14:38.855 10:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:38.855 10:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.114 BaseBdev2_malloc 00:14:39.114 10:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:39.373 true 00:14:39.373 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.373 [2024-07-26 10:24:52.243669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:39.373 [2024-07-26 10:24:52.243710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.373 [2024-07-26 10:24:52.243729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19ac960 00:14:39.373 [2024-07-26 10:24:52.243740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.373 [2024-07-26 10:24:52.244985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.373 [2024-07-26 10:24:52.245015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.373 BaseBdev2 00:14:39.373 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:39.632 [2024-07-26 10:24:52.468279] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.632 [2024-07-26 10:24:52.469326] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.632 [2024-07-26 10:24:52.469473] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1853860 00:14:39.632 [2024-07-26 10:24:52.469484] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.632 [2024-07-26 10:24:52.469647] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19ac280 00:14:39.632 [2024-07-26 10:24:52.469765] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1853860 00:14:39.632 [2024-07-26 10:24:52.469774] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1853860 00:14:39.632 [2024-07-26 10:24:52.469874] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.632 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:39.632 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.632 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.632 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 
-- # local raid_level=concat 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.633 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.892 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.892 "name": "raid_bdev1", 00:14:39.892 "uuid": "7f4a2adb-7979-44c6-9ec1-318ddf2c9734", 00:14:39.892 "strip_size_kb": 64, 00:14:39.892 "state": "online", 00:14:39.892 "raid_level": "concat", 00:14:39.892 "superblock": true, 00:14:39.892 "num_base_bdevs": 2, 00:14:39.892 "num_base_bdevs_discovered": 2, 00:14:39.892 "num_base_bdevs_operational": 2, 00:14:39.892 "base_bdevs_list": [ 00:14:39.892 { 00:14:39.892 "name": "BaseBdev1", 00:14:39.892 "uuid": "ba428f58-b5ba-555d-855e-42af6a004995", 00:14:39.892 "is_configured": true, 00:14:39.892 "data_offset": 2048, 00:14:39.892 "data_size": 63488 00:14:39.892 }, 00:14:39.892 { 00:14:39.892 "name": "BaseBdev2", 00:14:39.892 "uuid": "749ee54f-70d4-515b-a0fb-c26f338638fb", 00:14:39.892 "is_configured": true, 00:14:39.892 "data_offset": 2048, 00:14:39.892 "data_size": 63488 00:14:39.892 } 00:14:39.892 ] 00:14:39.892 }' 00:14:39.892 10:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.892 10:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.460 10:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:40.460 10:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:40.717 [2024-07-26 10:24:53.507264] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19aed10 00:14:41.651 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:41.910 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:41.910 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:14:41.910 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:14:41.910 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.169 10:24:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.169 10:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.169 10:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.169 "name": "raid_bdev1", 00:14:42.169 "uuid": "7f4a2adb-7979-44c6-9ec1-318ddf2c9734", 00:14:42.169 "strip_size_kb": 64, 00:14:42.169 "state": "online", 00:14:42.169 "raid_level": "concat", 00:14:42.169 "superblock": true, 00:14:42.169 "num_base_bdevs": 2, 00:14:42.169 "num_base_bdevs_discovered": 2, 00:14:42.169 "num_base_bdevs_operational": 2, 00:14:42.169 "base_bdevs_list": [ 00:14:42.169 { 00:14:42.169 "name": "BaseBdev1", 00:14:42.169 "uuid": "ba428f58-b5ba-555d-855e-42af6a004995", 00:14:42.169 "is_configured": true, 00:14:42.169 "data_offset": 2048, 00:14:42.169 "data_size": 63488 00:14:42.169 }, 00:14:42.169 { 00:14:42.169 "name": "BaseBdev2", 00:14:42.169 "uuid": "749ee54f-70d4-515b-a0fb-c26f338638fb", 00:14:42.169 "is_configured": true, 00:14:42.169 "data_offset": 2048, 00:14:42.169 "data_size": 63488 00:14:42.169 } 00:14:42.169 ] 00:14:42.169 }' 00:14:42.169 10:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.169 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.737 10:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:42.996 [2024-07-26 10:24:55.802935] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.996 [2024-07-26 10:24:55.802973] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.997 [2024-07-26 10:24:55.805879] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.997 [2024-07-26 10:24:55.805906] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.997 [2024-07-26 10:24:55.805931] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.997 [2024-07-26 10:24:55.805941] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1853860 name raid_bdev1, state offline 00:14:42.997 0 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3359853 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3359853 ']' 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3359853 
00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3359853 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3359853' 00:14:42.997 killing process with pid 3359853 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3359853 00:14:42.997 [2024-07-26 10:24:55.881196] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.997 10:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3359853 00:14:42.997 [2024-07-26 10:24:55.890638] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.YcCMvE5zAj 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.44 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.44 != \0\.\0\0 ]] 00:14:43.256 00:14:43.256 real 0m6.155s 00:14:43.256 user 0m9.768s 00:14:43.256 sys 0m1.068s 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.256 10:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.256 ************************************ 00:14:43.256 END TEST raid_write_error_test 00:14:43.256 ************************************ 00:14:43.256 10:24:56 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:43.256 10:24:56 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:43.256 10:24:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:43.256 10:24:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.256 10:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.516 ************************************ 00:14:43.516 START TEST raid_state_function_test 00:14:43.516 ************************************ 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:43.516 10:24:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:43.516 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3361001 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3361001' 00:14:43.517 Process raid pid: 3361001 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3361001 /var/tmp/spdk-raid.sock 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3361001 ']' 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
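raid_state_function_test exercises the raid bdev state machine against the bdev_svc app started above, without running any I/O: it calls bdev_raid_create -r raid1 before the base bdevs exist (the raid stays in the configuring state), adds BaseBdev1 and then BaseBdev2 until the raid goes online, dumps and compares the bdev properties, and finally deletes the base malloc bdevs to drive the raid back offline. Each step is verified by reading the raid state over RPC; the check used throughout the rest of this log is, in sketch form (workspace prefix shortened):
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'
with the resulting JSON compared field by field (state, raid_level, strip_size_kb, num_base_bdevs_discovered, and so on) against the expected values passed to verify_raid_bdev_state.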
00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.517 10:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.517 [2024-07-26 10:24:56.234579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:14:43.517 [2024-07-26 10:24:56.234636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 
0000:3f:01.4 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:43.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:43.517 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:43.517 [2024-07-26 10:24:56.369966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.517 [2024-07-26 10:24:56.413269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.777 [2024-07-26 10:24:56.475390] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.777 [2024-07-26 10:24:56.475443] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.345 10:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.345 10:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:44.345 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.604 [2024-07-26 10:24:57.348494] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.604 [2024-07-26 10:24:57.348534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.604 [2024-07-26 10:24:57.348544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.604 [2024-07-26 10:24:57.348555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.604 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.864 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.864 "name": "Existed_Raid", 00:14:44.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.864 "strip_size_kb": 0, 00:14:44.864 "state": "configuring", 00:14:44.864 "raid_level": "raid1", 00:14:44.864 "superblock": false, 00:14:44.864 "num_base_bdevs": 2, 00:14:44.864 "num_base_bdevs_discovered": 0, 00:14:44.864 "num_base_bdevs_operational": 2, 00:14:44.864 "base_bdevs_list": [ 00:14:44.864 { 00:14:44.864 "name": "BaseBdev1", 00:14:44.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.864 "is_configured": false, 00:14:44.864 "data_offset": 0, 00:14:44.864 "data_size": 0 00:14:44.864 }, 00:14:44.864 { 00:14:44.864 "name": "BaseBdev2", 00:14:44.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.864 "is_configured": false, 00:14:44.864 "data_offset": 0, 00:14:44.864 "data_size": 0 00:14:44.864 } 00:14:44.864 ] 00:14:44.864 }' 00:14:44.864 10:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.864 10:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.432 10:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:45.691 [2024-07-26 10:24:58.383115] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.691 [2024-07-26 10:24:58.383152] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1278ce0 name Existed_Raid, state configuring 00:14:45.691 10:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.950 [2024-07-26 10:24:58.611711] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.950 [2024-07-26 10:24:58.611738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.950 [2024-07-26 10:24:58.611747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.950 [2024-07-26 10:24:58.611758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.950 10:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b BaseBdev1 00:14:45.950 [2024-07-26 10:24:58.849774] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.950 BaseBdev1 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.209 10:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.209 10:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.469 [ 00:14:46.469 { 00:14:46.469 "name": "BaseBdev1", 00:14:46.469 "aliases": [ 00:14:46.469 "44a86de7-282e-4b7d-8090-2087e04bfb84" 00:14:46.469 ], 00:14:46.469 "product_name": "Malloc disk", 00:14:46.469 "block_size": 512, 00:14:46.469 "num_blocks": 65536, 00:14:46.469 "uuid": "44a86de7-282e-4b7d-8090-2087e04bfb84", 00:14:46.469 "assigned_rate_limits": { 00:14:46.469 "rw_ios_per_sec": 0, 00:14:46.469 "rw_mbytes_per_sec": 0, 00:14:46.469 "r_mbytes_per_sec": 0, 00:14:46.469 "w_mbytes_per_sec": 0 00:14:46.469 }, 00:14:46.469 "claimed": true, 00:14:46.469 "claim_type": "exclusive_write", 00:14:46.469 "zoned": false, 00:14:46.469 "supported_io_types": { 00:14:46.469 "read": true, 00:14:46.469 "write": true, 00:14:46.469 "unmap": true, 00:14:46.469 "flush": true, 00:14:46.469 "reset": true, 00:14:46.469 "nvme_admin": false, 00:14:46.469 "nvme_io": false, 00:14:46.469 "nvme_io_md": false, 00:14:46.469 "write_zeroes": true, 00:14:46.469 "zcopy": true, 00:14:46.469 "get_zone_info": false, 00:14:46.469 "zone_management": false, 00:14:46.469 "zone_append": false, 00:14:46.469 "compare": false, 00:14:46.469 "compare_and_write": false, 00:14:46.469 "abort": true, 00:14:46.469 "seek_hole": false, 00:14:46.469 "seek_data": false, 00:14:46.469 "copy": true, 00:14:46.469 "nvme_iov_md": false 00:14:46.469 }, 00:14:46.469 "memory_domains": [ 00:14:46.469 { 00:14:46.469 "dma_device_id": "system", 00:14:46.469 "dma_device_type": 1 00:14:46.469 }, 00:14:46.469 { 00:14:46.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.469 "dma_device_type": 2 00:14:46.469 } 00:14:46.469 ], 00:14:46.469 "driver_specific": {} 00:14:46.469 } 00:14:46.469 ] 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:46.469 10:24:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.469 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.728 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.729 "name": "Existed_Raid", 00:14:46.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.729 "strip_size_kb": 0, 00:14:46.729 "state": "configuring", 00:14:46.729 "raid_level": "raid1", 00:14:46.729 "superblock": false, 00:14:46.729 "num_base_bdevs": 2, 00:14:46.729 "num_base_bdevs_discovered": 1, 00:14:46.729 "num_base_bdevs_operational": 2, 00:14:46.729 "base_bdevs_list": [ 00:14:46.729 { 00:14:46.729 "name": "BaseBdev1", 00:14:46.729 "uuid": "44a86de7-282e-4b7d-8090-2087e04bfb84", 00:14:46.729 "is_configured": true, 00:14:46.729 "data_offset": 0, 00:14:46.729 "data_size": 65536 00:14:46.729 }, 00:14:46.729 { 00:14:46.729 "name": "BaseBdev2", 00:14:46.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.729 "is_configured": false, 00:14:46.729 "data_offset": 0, 00:14:46.729 "data_size": 0 00:14:46.729 } 00:14:46.729 ] 00:14:46.729 }' 00:14:46.729 10:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.729 10:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.296 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.555 [2024-07-26 10:25:00.317810] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.555 [2024-07-26 10:25:00.317847] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1278610 name Existed_Raid, state configuring 00:14:47.555 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:47.814 [2024-07-26 10:25:00.546434] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.814 [2024-07-26 10:25:00.547750] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.814 [2024-07-26 10:25:00.547780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.814 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.073 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.074 "name": "Existed_Raid", 00:14:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.074 "strip_size_kb": 0, 00:14:48.074 "state": "configuring", 00:14:48.074 "raid_level": "raid1", 00:14:48.074 "superblock": false, 00:14:48.074 "num_base_bdevs": 2, 00:14:48.074 "num_base_bdevs_discovered": 1, 00:14:48.074 "num_base_bdevs_operational": 2, 00:14:48.074 "base_bdevs_list": [ 00:14:48.074 { 00:14:48.074 "name": "BaseBdev1", 00:14:48.074 "uuid": "44a86de7-282e-4b7d-8090-2087e04bfb84", 00:14:48.074 "is_configured": true, 00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 65536 00:14:48.074 }, 00:14:48.074 { 00:14:48.074 "name": "BaseBdev2", 00:14:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.074 "is_configured": false, 00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 0 00:14:48.074 } 00:14:48.074 ] 00:14:48.074 }' 00:14:48.074 10:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.074 10:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.642 10:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.901 [2024-07-26 10:25:01.580318] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.901 [2024-07-26 10:25:01.580351] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x142b0f0 00:14:48.901 [2024-07-26 10:25:01.580358] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:48.901 [2024-07-26 10:25:01.580593] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x142d220 00:14:48.901 [2024-07-26 10:25:01.580704] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x142b0f0 00:14:48.901 [2024-07-26 10:25:01.580713] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x142b0f0 00:14:48.901 [2024-07-26 10:25:01.580860] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.901 BaseBdev2 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.901 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.161 10:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.161 [ 00:14:49.161 { 00:14:49.161 "name": "BaseBdev2", 00:14:49.161 "aliases": [ 00:14:49.161 "e3810599-ca45-4b55-b152-81441b8ecfc6" 00:14:49.161 ], 00:14:49.161 "product_name": "Malloc disk", 00:14:49.161 "block_size": 512, 00:14:49.161 "num_blocks": 65536, 00:14:49.161 "uuid": "e3810599-ca45-4b55-b152-81441b8ecfc6", 00:14:49.161 "assigned_rate_limits": { 00:14:49.161 "rw_ios_per_sec": 0, 00:14:49.161 "rw_mbytes_per_sec": 0, 00:14:49.161 "r_mbytes_per_sec": 0, 00:14:49.161 "w_mbytes_per_sec": 0 00:14:49.161 }, 00:14:49.161 "claimed": true, 00:14:49.161 "claim_type": "exclusive_write", 00:14:49.161 "zoned": false, 00:14:49.161 "supported_io_types": { 00:14:49.161 "read": true, 00:14:49.161 "write": true, 00:14:49.161 "unmap": true, 00:14:49.161 "flush": true, 00:14:49.161 "reset": true, 00:14:49.161 "nvme_admin": false, 00:14:49.161 "nvme_io": false, 00:14:49.161 "nvme_io_md": false, 00:14:49.161 "write_zeroes": true, 00:14:49.161 "zcopy": true, 00:14:49.161 "get_zone_info": false, 00:14:49.161 "zone_management": false, 00:14:49.161 "zone_append": false, 00:14:49.161 "compare": false, 00:14:49.161 "compare_and_write": false, 00:14:49.161 "abort": true, 00:14:49.161 "seek_hole": false, 00:14:49.161 "seek_data": false, 00:14:49.161 "copy": true, 00:14:49.161 "nvme_iov_md": false 00:14:49.161 }, 00:14:49.161 "memory_domains": [ 00:14:49.161 { 00:14:49.161 "dma_device_id": "system", 00:14:49.161 "dma_device_type": 1 00:14:49.161 }, 00:14:49.161 { 00:14:49.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.161 "dma_device_type": 2 00:14:49.161 } 00:14:49.161 ], 00:14:49.161 "driver_specific": {} 00:14:49.161 } 00:14:49.161 ] 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.161 10:25:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.161 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.420 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.420 "name": "Existed_Raid", 00:14:49.420 "uuid": "bd826ad9-40d3-4e7f-9702-4712018c77e1", 00:14:49.420 "strip_size_kb": 0, 00:14:49.420 "state": "online", 00:14:49.420 "raid_level": "raid1", 00:14:49.420 "superblock": false, 00:14:49.420 "num_base_bdevs": 2, 00:14:49.420 "num_base_bdevs_discovered": 2, 00:14:49.420 "num_base_bdevs_operational": 2, 00:14:49.420 "base_bdevs_list": [ 00:14:49.420 { 00:14:49.420 "name": "BaseBdev1", 00:14:49.420 "uuid": "44a86de7-282e-4b7d-8090-2087e04bfb84", 00:14:49.420 "is_configured": true, 00:14:49.420 "data_offset": 0, 00:14:49.420 "data_size": 65536 00:14:49.420 }, 00:14:49.420 { 00:14:49.420 "name": "BaseBdev2", 00:14:49.420 "uuid": "e3810599-ca45-4b55-b152-81441b8ecfc6", 00:14:49.420 "is_configured": true, 00:14:49.420 "data_offset": 0, 00:14:49.420 "data_size": 65536 00:14:49.420 } 00:14:49.420 ] 00:14:49.420 }' 00:14:49.420 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.420 10:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:49.989 10:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:50.248 [2024-07-26 10:25:03.072502] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.248 10:25:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:50.248 "name": "Existed_Raid", 00:14:50.248 "aliases": [ 00:14:50.248 "bd826ad9-40d3-4e7f-9702-4712018c77e1" 00:14:50.248 ], 00:14:50.248 "product_name": "Raid Volume", 00:14:50.248 "block_size": 512, 00:14:50.248 "num_blocks": 65536, 00:14:50.248 "uuid": "bd826ad9-40d3-4e7f-9702-4712018c77e1", 00:14:50.248 "assigned_rate_limits": { 00:14:50.248 "rw_ios_per_sec": 0, 00:14:50.248 "rw_mbytes_per_sec": 0, 00:14:50.248 "r_mbytes_per_sec": 0, 00:14:50.248 "w_mbytes_per_sec": 0 00:14:50.248 }, 00:14:50.248 "claimed": false, 00:14:50.248 "zoned": false, 00:14:50.248 "supported_io_types": { 00:14:50.248 "read": true, 00:14:50.248 "write": true, 00:14:50.248 "unmap": false, 00:14:50.248 "flush": false, 00:14:50.248 "reset": true, 00:14:50.248 "nvme_admin": false, 00:14:50.248 "nvme_io": false, 00:14:50.248 "nvme_io_md": false, 00:14:50.248 "write_zeroes": true, 00:14:50.248 "zcopy": false, 00:14:50.248 "get_zone_info": false, 00:14:50.248 "zone_management": false, 00:14:50.248 "zone_append": false, 00:14:50.248 "compare": false, 00:14:50.248 "compare_and_write": false, 00:14:50.248 "abort": false, 00:14:50.248 "seek_hole": false, 00:14:50.248 "seek_data": false, 00:14:50.248 "copy": false, 00:14:50.248 "nvme_iov_md": false 00:14:50.248 }, 00:14:50.248 "memory_domains": [ 00:14:50.248 { 00:14:50.248 "dma_device_id": "system", 00:14:50.248 "dma_device_type": 1 00:14:50.248 }, 00:14:50.248 { 00:14:50.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.248 "dma_device_type": 2 00:14:50.248 }, 00:14:50.248 { 00:14:50.248 "dma_device_id": "system", 00:14:50.248 "dma_device_type": 1 00:14:50.248 }, 00:14:50.248 { 00:14:50.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.248 "dma_device_type": 2 00:14:50.248 } 00:14:50.248 ], 00:14:50.248 "driver_specific": { 00:14:50.248 "raid": { 00:14:50.248 "uuid": "bd826ad9-40d3-4e7f-9702-4712018c77e1", 00:14:50.248 "strip_size_kb": 0, 00:14:50.248 "state": "online", 00:14:50.248 "raid_level": "raid1", 00:14:50.248 "superblock": false, 00:14:50.248 "num_base_bdevs": 2, 00:14:50.248 "num_base_bdevs_discovered": 2, 00:14:50.248 "num_base_bdevs_operational": 2, 00:14:50.248 "base_bdevs_list": [ 00:14:50.248 { 00:14:50.248 "name": "BaseBdev1", 00:14:50.248 "uuid": "44a86de7-282e-4b7d-8090-2087e04bfb84", 00:14:50.248 "is_configured": true, 00:14:50.248 "data_offset": 0, 00:14:50.248 "data_size": 65536 00:14:50.248 }, 00:14:50.248 { 00:14:50.248 "name": "BaseBdev2", 00:14:50.248 "uuid": "e3810599-ca45-4b55-b152-81441b8ecfc6", 00:14:50.248 "is_configured": true, 00:14:50.248 "data_offset": 0, 00:14:50.248 "data_size": 65536 00:14:50.248 } 00:14:50.248 ] 00:14:50.248 } 00:14:50.248 } 00:14:50.248 }' 00:14:50.248 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.248 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:50.248 BaseBdev2' 00:14:50.248 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.248 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:50.248 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.507 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:14:50.507 "name": "BaseBdev1", 00:14:50.507 "aliases": [ 00:14:50.507 "44a86de7-282e-4b7d-8090-2087e04bfb84" 00:14:50.507 ], 00:14:50.507 "product_name": "Malloc disk", 00:14:50.507 "block_size": 512, 00:14:50.507 "num_blocks": 65536, 00:14:50.507 "uuid": "44a86de7-282e-4b7d-8090-2087e04bfb84", 00:14:50.507 "assigned_rate_limits": { 00:14:50.507 "rw_ios_per_sec": 0, 00:14:50.507 "rw_mbytes_per_sec": 0, 00:14:50.507 "r_mbytes_per_sec": 0, 00:14:50.507 "w_mbytes_per_sec": 0 00:14:50.507 }, 00:14:50.507 "claimed": true, 00:14:50.507 "claim_type": "exclusive_write", 00:14:50.507 "zoned": false, 00:14:50.507 "supported_io_types": { 00:14:50.507 "read": true, 00:14:50.507 "write": true, 00:14:50.507 "unmap": true, 00:14:50.507 "flush": true, 00:14:50.507 "reset": true, 00:14:50.507 "nvme_admin": false, 00:14:50.507 "nvme_io": false, 00:14:50.507 "nvme_io_md": false, 00:14:50.507 "write_zeroes": true, 00:14:50.507 "zcopy": true, 00:14:50.507 "get_zone_info": false, 00:14:50.507 "zone_management": false, 00:14:50.507 "zone_append": false, 00:14:50.507 "compare": false, 00:14:50.507 "compare_and_write": false, 00:14:50.507 "abort": true, 00:14:50.507 "seek_hole": false, 00:14:50.507 "seek_data": false, 00:14:50.507 "copy": true, 00:14:50.507 "nvme_iov_md": false 00:14:50.507 }, 00:14:50.507 "memory_domains": [ 00:14:50.507 { 00:14:50.507 "dma_device_id": "system", 00:14:50.507 "dma_device_type": 1 00:14:50.507 }, 00:14:50.507 { 00:14:50.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.507 "dma_device_type": 2 00:14:50.507 } 00:14:50.507 ], 00:14:50.507 "driver_specific": {} 00:14:50.507 }' 00:14:50.507 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.507 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.766 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.054 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.054 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.054 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:51.054 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.054 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.054 "name": "BaseBdev2", 00:14:51.054 "aliases": [ 00:14:51.054 "e3810599-ca45-4b55-b152-81441b8ecfc6" 00:14:51.054 ], 00:14:51.054 "product_name": "Malloc disk", 
00:14:51.054 "block_size": 512, 00:14:51.054 "num_blocks": 65536, 00:14:51.054 "uuid": "e3810599-ca45-4b55-b152-81441b8ecfc6", 00:14:51.054 "assigned_rate_limits": { 00:14:51.054 "rw_ios_per_sec": 0, 00:14:51.054 "rw_mbytes_per_sec": 0, 00:14:51.054 "r_mbytes_per_sec": 0, 00:14:51.054 "w_mbytes_per_sec": 0 00:14:51.054 }, 00:14:51.054 "claimed": true, 00:14:51.054 "claim_type": "exclusive_write", 00:14:51.054 "zoned": false, 00:14:51.054 "supported_io_types": { 00:14:51.054 "read": true, 00:14:51.054 "write": true, 00:14:51.054 "unmap": true, 00:14:51.054 "flush": true, 00:14:51.054 "reset": true, 00:14:51.054 "nvme_admin": false, 00:14:51.054 "nvme_io": false, 00:14:51.054 "nvme_io_md": false, 00:14:51.054 "write_zeroes": true, 00:14:51.054 "zcopy": true, 00:14:51.054 "get_zone_info": false, 00:14:51.054 "zone_management": false, 00:14:51.054 "zone_append": false, 00:14:51.054 "compare": false, 00:14:51.054 "compare_and_write": false, 00:14:51.054 "abort": true, 00:14:51.054 "seek_hole": false, 00:14:51.054 "seek_data": false, 00:14:51.054 "copy": true, 00:14:51.054 "nvme_iov_md": false 00:14:51.054 }, 00:14:51.054 "memory_domains": [ 00:14:51.054 { 00:14:51.054 "dma_device_id": "system", 00:14:51.054 "dma_device_type": 1 00:14:51.054 }, 00:14:51.054 { 00:14:51.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.054 "dma_device_type": 2 00:14:51.054 } 00:14:51.054 ], 00:14:51.054 "driver_specific": {} 00:14:51.054 }' 00:14:51.054 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.321 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.321 10:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.321 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:51.579 [2024-07-26 10:25:04.451952] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:51.579 10:25:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.579 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.837 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.837 "name": "Existed_Raid", 00:14:51.837 "uuid": "bd826ad9-40d3-4e7f-9702-4712018c77e1", 00:14:51.837 "strip_size_kb": 0, 00:14:51.837 "state": "online", 00:14:51.837 "raid_level": "raid1", 00:14:51.837 "superblock": false, 00:14:51.837 "num_base_bdevs": 2, 00:14:51.837 "num_base_bdevs_discovered": 1, 00:14:51.837 "num_base_bdevs_operational": 1, 00:14:51.837 "base_bdevs_list": [ 00:14:51.837 { 00:14:51.837 "name": null, 00:14:51.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.837 "is_configured": false, 00:14:51.837 "data_offset": 0, 00:14:51.837 "data_size": 65536 00:14:51.837 }, 00:14:51.837 { 00:14:51.837 "name": "BaseBdev2", 00:14:51.837 "uuid": "e3810599-ca45-4b55-b152-81441b8ecfc6", 00:14:51.837 "is_configured": true, 00:14:51.837 "data_offset": 0, 00:14:51.837 "data_size": 65536 00:14:51.837 } 00:14:51.837 ] 00:14:51.837 }' 00:14:51.837 10:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.837 10:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.403 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:52.403 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:52.403 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.403 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:52.661 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:52.661 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.661 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:52.919 [2024-07-26 10:25:05.712312] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.919 [2024-07-26 10:25:05.712390] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.919 [2024-07-26 10:25:05.722720] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.919 [2024-07-26 10:25:05.722751] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.919 [2024-07-26 10:25:05.722762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x142b0f0 name Existed_Raid, state offline 00:14:52.919 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:52.919 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:52.919 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.919 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3361001 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3361001 ']' 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3361001 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.177 10:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3361001 00:14:53.177 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.177 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.177 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3361001' 00:14:53.177 killing process with pid 3361001 00:14:53.177 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3361001 00:14:53.177 [2024-07-26 10:25:06.028425] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.177 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3361001 00:14:53.177 [2024-07-26 10:25:06.029291] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:53.436 00:14:53.436 real 0m10.037s 00:14:53.436 user 0m17.774s 00:14:53.436 sys 0m1.965s 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.436 ************************************ 00:14:53.436 END TEST 
raid_state_function_test 00:14:53.436 ************************************ 00:14:53.436 10:25:06 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:53.436 10:25:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:53.436 10:25:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.436 10:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.436 ************************************ 00:14:53.436 START TEST raid_state_function_test_sb 00:14:53.436 ************************************ 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:53.436 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3362958 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3362958' 00:14:53.437 Process raid pid: 3362958 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3362958 /var/tmp/spdk-raid.sock 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3362958 ']' 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:53.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.437 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 [2024-07-26 10:25:06.354314] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:14:53.695 [2024-07-26 10:25:06.354376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:53.695 
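
The qat_pci_device_allocate()/EAL messages here and just below record that the host's QAT crypto devices cannot be claimed by this process; the run continues past them, and the raid tests below only use malloc bdevs. The test has just launched the generic bdev application and then waits for its RPC socket; a minimal sketch of that launch-and-wait pattern (not the exact waitforlisten implementation, and the polling loop is only illustrative) is:

    app=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
    sock=/var/tmp/spdk-raid.sock
    "$app" -r "$sock" -i 0 -L bdev_raid &      # same command line as in this run
    raid_pid=$!
    until [ -S "$sock" ]; do sleep 0.1; done   # crude wait until the UNIX socket appears
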
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.695 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:53.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:53.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.696 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:53.696 [2024-07-26 10:25:06.490121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.696 [2024-07-26 10:25:06.533108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.696 [2024-07-26 10:25:06.596644] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.696 [2024-07-26 10:25:06.596680] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:54.629 [2024-07-26 10:25:07.465530] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:14:54.629 [2024-07-26 10:25:07.465569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.629 [2024-07-26 10:25:07.465580] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.629 [2024-07-26 10:25:07.465590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.629 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.887 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.887 "name": "Existed_Raid", 00:14:54.887 "uuid": "85e166db-019a-483c-95b3-a503981843c2", 00:14:54.887 "strip_size_kb": 0, 00:14:54.887 "state": "configuring", 00:14:54.887 "raid_level": "raid1", 00:14:54.887 "superblock": true, 00:14:54.887 "num_base_bdevs": 2, 00:14:54.887 "num_base_bdevs_discovered": 0, 00:14:54.887 "num_base_bdevs_operational": 2, 00:14:54.888 "base_bdevs_list": [ 00:14:54.888 { 00:14:54.888 "name": "BaseBdev1", 00:14:54.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.888 "is_configured": false, 00:14:54.888 "data_offset": 0, 00:14:54.888 "data_size": 0 00:14:54.888 }, 00:14:54.888 { 00:14:54.888 "name": "BaseBdev2", 00:14:54.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.888 "is_configured": false, 00:14:54.888 "data_offset": 0, 00:14:54.888 "data_size": 0 00:14:54.888 } 00:14:54.888 ] 00:14:54.888 }' 00:14:54.888 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.888 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.454 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:55.712 [2024-07-26 10:25:08.496110] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.712 [2024-07-26 10:25:08.496145] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x1aa6ce0 name Existed_Raid, state configuring 00:14:55.712 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:55.970 [2024-07-26 10:25:08.724715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.970 [2024-07-26 10:25:08.724741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.970 [2024-07-26 10:25:08.724750] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.970 [2024-07-26 10:25:08.724760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.970 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.228 [2024-07-26 10:25:08.962715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.228 BaseBdev1 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:56.228 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.486 10:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.756 [ 00:14:56.756 { 00:14:56.756 "name": "BaseBdev1", 00:14:56.756 "aliases": [ 00:14:56.756 "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44" 00:14:56.756 ], 00:14:56.756 "product_name": "Malloc disk", 00:14:56.756 "block_size": 512, 00:14:56.756 "num_blocks": 65536, 00:14:56.756 "uuid": "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44", 00:14:56.756 "assigned_rate_limits": { 00:14:56.756 "rw_ios_per_sec": 0, 00:14:56.756 "rw_mbytes_per_sec": 0, 00:14:56.756 "r_mbytes_per_sec": 0, 00:14:56.756 "w_mbytes_per_sec": 0 00:14:56.756 }, 00:14:56.756 "claimed": true, 00:14:56.756 "claim_type": "exclusive_write", 00:14:56.756 "zoned": false, 00:14:56.756 "supported_io_types": { 00:14:56.756 "read": true, 00:14:56.756 "write": true, 00:14:56.756 "unmap": true, 00:14:56.756 "flush": true, 00:14:56.756 "reset": true, 00:14:56.756 "nvme_admin": false, 00:14:56.756 "nvme_io": false, 00:14:56.756 "nvme_io_md": false, 00:14:56.756 "write_zeroes": true, 00:14:56.756 "zcopy": true, 00:14:56.756 "get_zone_info": false, 00:14:56.756 "zone_management": false, 00:14:56.756 "zone_append": false, 00:14:56.756 "compare": false, 00:14:56.756 "compare_and_write": false, 00:14:56.756 "abort": true, 00:14:56.756 "seek_hole": false, 00:14:56.756 "seek_data": false, 
00:14:56.756 "copy": true, 00:14:56.756 "nvme_iov_md": false 00:14:56.756 }, 00:14:56.756 "memory_domains": [ 00:14:56.756 { 00:14:56.756 "dma_device_id": "system", 00:14:56.756 "dma_device_type": 1 00:14:56.756 }, 00:14:56.756 { 00:14:56.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.756 "dma_device_type": 2 00:14:56.756 } 00:14:56.756 ], 00:14:56.756 "driver_specific": {} 00:14:56.756 } 00:14:56.756 ] 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.756 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.015 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.015 "name": "Existed_Raid", 00:14:57.015 "uuid": "3c245f33-9549-435b-bb79-e24a0e51e031", 00:14:57.015 "strip_size_kb": 0, 00:14:57.015 "state": "configuring", 00:14:57.015 "raid_level": "raid1", 00:14:57.015 "superblock": true, 00:14:57.015 "num_base_bdevs": 2, 00:14:57.015 "num_base_bdevs_discovered": 1, 00:14:57.015 "num_base_bdevs_operational": 2, 00:14:57.015 "base_bdevs_list": [ 00:14:57.015 { 00:14:57.015 "name": "BaseBdev1", 00:14:57.015 "uuid": "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44", 00:14:57.015 "is_configured": true, 00:14:57.015 "data_offset": 2048, 00:14:57.015 "data_size": 63488 00:14:57.015 }, 00:14:57.015 { 00:14:57.015 "name": "BaseBdev2", 00:14:57.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.015 "is_configured": false, 00:14:57.015 "data_offset": 0, 00:14:57.015 "data_size": 0 00:14:57.015 } 00:14:57.015 ] 00:14:57.015 }' 00:14:57.015 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.015 10:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:57.582 [2024-07-26 10:25:10.462658] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:14:57.582 [2024-07-26 10:25:10.462700] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1aa6610 name Existed_Raid, state configuring 00:14:57.582 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:57.840 [2024-07-26 10:25:10.691287] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.840 [2024-07-26 10:25:10.692622] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.840 [2024-07-26 10:25:10.692654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.840 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.098 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.098 "name": "Existed_Raid", 00:14:58.098 "uuid": "a9bd1565-adf7-420d-b810-dd8d791ad04e", 00:14:58.098 "strip_size_kb": 0, 00:14:58.098 "state": "configuring", 00:14:58.098 "raid_level": "raid1", 00:14:58.098 "superblock": true, 00:14:58.098 "num_base_bdevs": 2, 00:14:58.098 "num_base_bdevs_discovered": 1, 00:14:58.098 "num_base_bdevs_operational": 2, 00:14:58.098 "base_bdevs_list": [ 00:14:58.098 { 00:14:58.098 "name": "BaseBdev1", 00:14:58.098 "uuid": "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44", 00:14:58.098 "is_configured": true, 00:14:58.098 "data_offset": 2048, 00:14:58.098 "data_size": 63488 00:14:58.098 }, 00:14:58.098 { 00:14:58.098 "name": "BaseBdev2", 00:14:58.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.098 "is_configured": false, 00:14:58.098 "data_offset": 0, 00:14:58.098 "data_size": 0 00:14:58.098 } 00:14:58.098 ] 00:14:58.098 }' 00:14:58.098 10:25:10 
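
verify_raid_bdev_state, used repeatedly above, is essentially a bdev_raid_get_bdevs query filtered through jq and compared against the expected values. A condensed, hypothetical version of that check for the "configuring" state just dumped might be:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = "configuring" ]   # expected while BaseBdev2 does not exist yet
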
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.098 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.665 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.925 [2024-07-26 10:25:11.673067] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.925 [2024-07-26 10:25:11.673211] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c590f0 00:14:58.925 [2024-07-26 10:25:11.673224] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:58.925 [2024-07-26 10:25:11.673390] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aa59a0 00:14:58.925 [2024-07-26 10:25:11.673505] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c590f0 00:14:58.925 [2024-07-26 10:25:11.673515] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c590f0 00:14:58.925 [2024-07-26 10:25:11.673602] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.925 BaseBdev2 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.925 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:59.184 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.442 [ 00:14:59.442 { 00:14:59.442 "name": "BaseBdev2", 00:14:59.442 "aliases": [ 00:14:59.442 "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85" 00:14:59.442 ], 00:14:59.442 "product_name": "Malloc disk", 00:14:59.442 "block_size": 512, 00:14:59.442 "num_blocks": 65536, 00:14:59.442 "uuid": "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85", 00:14:59.442 "assigned_rate_limits": { 00:14:59.442 "rw_ios_per_sec": 0, 00:14:59.442 "rw_mbytes_per_sec": 0, 00:14:59.442 "r_mbytes_per_sec": 0, 00:14:59.442 "w_mbytes_per_sec": 0 00:14:59.442 }, 00:14:59.442 "claimed": true, 00:14:59.442 "claim_type": "exclusive_write", 00:14:59.442 "zoned": false, 00:14:59.442 "supported_io_types": { 00:14:59.442 "read": true, 00:14:59.442 "write": true, 00:14:59.442 "unmap": true, 00:14:59.442 "flush": true, 00:14:59.442 "reset": true, 00:14:59.442 "nvme_admin": false, 00:14:59.442 "nvme_io": false, 00:14:59.442 "nvme_io_md": false, 00:14:59.442 "write_zeroes": true, 00:14:59.442 "zcopy": true, 00:14:59.442 "get_zone_info": false, 00:14:59.442 "zone_management": false, 00:14:59.442 "zone_append": false, 00:14:59.442 "compare": false, 
00:14:59.442 "compare_and_write": false, 00:14:59.442 "abort": true, 00:14:59.442 "seek_hole": false, 00:14:59.442 "seek_data": false, 00:14:59.442 "copy": true, 00:14:59.442 "nvme_iov_md": false 00:14:59.442 }, 00:14:59.442 "memory_domains": [ 00:14:59.442 { 00:14:59.442 "dma_device_id": "system", 00:14:59.442 "dma_device_type": 1 00:14:59.442 }, 00:14:59.442 { 00:14:59.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.442 "dma_device_type": 2 00:14:59.442 } 00:14:59.442 ], 00:14:59.442 "driver_specific": {} 00:14:59.442 } 00:14:59.442 ] 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.442 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.700 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.700 "name": "Existed_Raid", 00:14:59.700 "uuid": "a9bd1565-adf7-420d-b810-dd8d791ad04e", 00:14:59.700 "strip_size_kb": 0, 00:14:59.700 "state": "online", 00:14:59.700 "raid_level": "raid1", 00:14:59.700 "superblock": true, 00:14:59.700 "num_base_bdevs": 2, 00:14:59.700 "num_base_bdevs_discovered": 2, 00:14:59.700 "num_base_bdevs_operational": 2, 00:14:59.700 "base_bdevs_list": [ 00:14:59.700 { 00:14:59.700 "name": "BaseBdev1", 00:14:59.700 "uuid": "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44", 00:14:59.700 "is_configured": true, 00:14:59.700 "data_offset": 2048, 00:14:59.700 "data_size": 63488 00:14:59.700 }, 00:14:59.700 { 00:14:59.700 "name": "BaseBdev2", 00:14:59.700 "uuid": "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85", 00:14:59.700 "is_configured": true, 00:14:59.700 "data_offset": 2048, 00:14:59.700 "data_size": 63488 00:14:59.700 } 00:14:59.700 ] 00:14:59.700 }' 00:14:59.700 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.700 10:25:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:00.267 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:00.267 [2024-07-26 10:25:13.081126] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.267 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:00.267 "name": "Existed_Raid", 00:15:00.267 "aliases": [ 00:15:00.267 "a9bd1565-adf7-420d-b810-dd8d791ad04e" 00:15:00.267 ], 00:15:00.267 "product_name": "Raid Volume", 00:15:00.267 "block_size": 512, 00:15:00.267 "num_blocks": 63488, 00:15:00.267 "uuid": "a9bd1565-adf7-420d-b810-dd8d791ad04e", 00:15:00.267 "assigned_rate_limits": { 00:15:00.267 "rw_ios_per_sec": 0, 00:15:00.267 "rw_mbytes_per_sec": 0, 00:15:00.267 "r_mbytes_per_sec": 0, 00:15:00.267 "w_mbytes_per_sec": 0 00:15:00.267 }, 00:15:00.267 "claimed": false, 00:15:00.267 "zoned": false, 00:15:00.267 "supported_io_types": { 00:15:00.267 "read": true, 00:15:00.267 "write": true, 00:15:00.267 "unmap": false, 00:15:00.267 "flush": false, 00:15:00.267 "reset": true, 00:15:00.267 "nvme_admin": false, 00:15:00.267 "nvme_io": false, 00:15:00.267 "nvme_io_md": false, 00:15:00.267 "write_zeroes": true, 00:15:00.267 "zcopy": false, 00:15:00.267 "get_zone_info": false, 00:15:00.267 "zone_management": false, 00:15:00.267 "zone_append": false, 00:15:00.267 "compare": false, 00:15:00.267 "compare_and_write": false, 00:15:00.267 "abort": false, 00:15:00.267 "seek_hole": false, 00:15:00.267 "seek_data": false, 00:15:00.267 "copy": false, 00:15:00.267 "nvme_iov_md": false 00:15:00.267 }, 00:15:00.267 "memory_domains": [ 00:15:00.267 { 00:15:00.267 "dma_device_id": "system", 00:15:00.267 "dma_device_type": 1 00:15:00.267 }, 00:15:00.267 { 00:15:00.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.267 "dma_device_type": 2 00:15:00.267 }, 00:15:00.267 { 00:15:00.267 "dma_device_id": "system", 00:15:00.267 "dma_device_type": 1 00:15:00.267 }, 00:15:00.267 { 00:15:00.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.267 "dma_device_type": 2 00:15:00.267 } 00:15:00.267 ], 00:15:00.267 "driver_specific": { 00:15:00.267 "raid": { 00:15:00.267 "uuid": "a9bd1565-adf7-420d-b810-dd8d791ad04e", 00:15:00.267 "strip_size_kb": 0, 00:15:00.267 "state": "online", 00:15:00.267 "raid_level": "raid1", 00:15:00.267 "superblock": true, 00:15:00.267 "num_base_bdevs": 2, 00:15:00.267 "num_base_bdevs_discovered": 2, 00:15:00.267 "num_base_bdevs_operational": 2, 00:15:00.267 "base_bdevs_list": [ 00:15:00.267 { 00:15:00.267 "name": "BaseBdev1", 00:15:00.267 "uuid": "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44", 
00:15:00.267 "is_configured": true, 00:15:00.267 "data_offset": 2048, 00:15:00.267 "data_size": 63488 00:15:00.267 }, 00:15:00.267 { 00:15:00.267 "name": "BaseBdev2", 00:15:00.267 "uuid": "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85", 00:15:00.267 "is_configured": true, 00:15:00.267 "data_offset": 2048, 00:15:00.267 "data_size": 63488 00:15:00.267 } 00:15:00.267 ] 00:15:00.267 } 00:15:00.267 } 00:15:00.267 }' 00:15:00.267 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.267 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:00.267 BaseBdev2' 00:15:00.267 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.267 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:00.267 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.526 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.526 "name": "BaseBdev1", 00:15:00.526 "aliases": [ 00:15:00.526 "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44" 00:15:00.526 ], 00:15:00.526 "product_name": "Malloc disk", 00:15:00.526 "block_size": 512, 00:15:00.526 "num_blocks": 65536, 00:15:00.526 "uuid": "5ee7fa02-b55a-4ca0-9ca2-f3d2284a9a44", 00:15:00.526 "assigned_rate_limits": { 00:15:00.526 "rw_ios_per_sec": 0, 00:15:00.526 "rw_mbytes_per_sec": 0, 00:15:00.526 "r_mbytes_per_sec": 0, 00:15:00.526 "w_mbytes_per_sec": 0 00:15:00.526 }, 00:15:00.526 "claimed": true, 00:15:00.526 "claim_type": "exclusive_write", 00:15:00.526 "zoned": false, 00:15:00.526 "supported_io_types": { 00:15:00.526 "read": true, 00:15:00.526 "write": true, 00:15:00.526 "unmap": true, 00:15:00.526 "flush": true, 00:15:00.526 "reset": true, 00:15:00.526 "nvme_admin": false, 00:15:00.526 "nvme_io": false, 00:15:00.526 "nvme_io_md": false, 00:15:00.526 "write_zeroes": true, 00:15:00.526 "zcopy": true, 00:15:00.526 "get_zone_info": false, 00:15:00.526 "zone_management": false, 00:15:00.526 "zone_append": false, 00:15:00.526 "compare": false, 00:15:00.526 "compare_and_write": false, 00:15:00.526 "abort": true, 00:15:00.526 "seek_hole": false, 00:15:00.526 "seek_data": false, 00:15:00.526 "copy": true, 00:15:00.526 "nvme_iov_md": false 00:15:00.526 }, 00:15:00.526 "memory_domains": [ 00:15:00.526 { 00:15:00.526 "dma_device_id": "system", 00:15:00.526 "dma_device_type": 1 00:15:00.526 }, 00:15:00.526 { 00:15:00.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.526 "dma_device_type": 2 00:15:00.526 } 00:15:00.526 ], 00:15:00.526 "driver_specific": {} 00:15:00.526 }' 00:15:00.526 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.526 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.785 10:25:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.785 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.044 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.044 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.044 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:01.044 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.044 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.044 "name": "BaseBdev2", 00:15:01.044 "aliases": [ 00:15:01.044 "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85" 00:15:01.044 ], 00:15:01.044 "product_name": "Malloc disk", 00:15:01.044 "block_size": 512, 00:15:01.044 "num_blocks": 65536, 00:15:01.044 "uuid": "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85", 00:15:01.044 "assigned_rate_limits": { 00:15:01.044 "rw_ios_per_sec": 0, 00:15:01.044 "rw_mbytes_per_sec": 0, 00:15:01.044 "r_mbytes_per_sec": 0, 00:15:01.044 "w_mbytes_per_sec": 0 00:15:01.044 }, 00:15:01.044 "claimed": true, 00:15:01.044 "claim_type": "exclusive_write", 00:15:01.044 "zoned": false, 00:15:01.044 "supported_io_types": { 00:15:01.044 "read": true, 00:15:01.044 "write": true, 00:15:01.044 "unmap": true, 00:15:01.044 "flush": true, 00:15:01.044 "reset": true, 00:15:01.044 "nvme_admin": false, 00:15:01.044 "nvme_io": false, 00:15:01.044 "nvme_io_md": false, 00:15:01.044 "write_zeroes": true, 00:15:01.044 "zcopy": true, 00:15:01.044 "get_zone_info": false, 00:15:01.044 "zone_management": false, 00:15:01.044 "zone_append": false, 00:15:01.044 "compare": false, 00:15:01.044 "compare_and_write": false, 00:15:01.044 "abort": true, 00:15:01.044 "seek_hole": false, 00:15:01.044 "seek_data": false, 00:15:01.044 "copy": true, 00:15:01.044 "nvme_iov_md": false 00:15:01.044 }, 00:15:01.044 "memory_domains": [ 00:15:01.044 { 00:15:01.044 "dma_device_id": "system", 00:15:01.044 "dma_device_type": 1 00:15:01.044 }, 00:15:01.044 { 00:15:01.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.044 "dma_device_type": 2 00:15:01.044 } 00:15:01.044 ], 00:15:01.044 "driver_specific": {} 00:15:01.044 }' 00:15:01.044 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.302 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.302 10:25:14 
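
The property pass for BaseBdev2 finishes just below, and the script then deletes BaseBdev1 with bdev_malloc_delete. Because raid1 is treated as a redundant level (has_redundancy returns 0), the expectation checked next is that Existed_Raid stays online with a single operational member; a hypothetical condensed version of that check would be:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_operational)"'
    # expected: "online 1", matching verify_raid_bdev_state Existed_Raid online raid1 0 1
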
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.302 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.560 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.560 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.560 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:01.817 [2024-07-26 10:25:14.464568] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.818 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.076 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.076 "name": "Existed_Raid", 00:15:02.076 "uuid": "a9bd1565-adf7-420d-b810-dd8d791ad04e", 00:15:02.076 "strip_size_kb": 0, 00:15:02.076 "state": "online", 00:15:02.076 "raid_level": "raid1", 00:15:02.076 "superblock": true, 00:15:02.076 "num_base_bdevs": 2, 00:15:02.076 "num_base_bdevs_discovered": 1, 00:15:02.076 "num_base_bdevs_operational": 1, 00:15:02.076 "base_bdevs_list": [ 00:15:02.076 { 00:15:02.076 "name": null, 00:15:02.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.076 "is_configured": false, 00:15:02.076 "data_offset": 2048, 
00:15:02.076 "data_size": 63488 00:15:02.076 }, 00:15:02.076 { 00:15:02.076 "name": "BaseBdev2", 00:15:02.076 "uuid": "3cf6ed4c-33e7-4ae5-aadc-82120f2cba85", 00:15:02.076 "is_configured": true, 00:15:02.076 "data_offset": 2048, 00:15:02.076 "data_size": 63488 00:15:02.076 } 00:15:02.076 ] 00:15:02.076 }' 00:15:02.076 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.076 10:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.644 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:02.902 [2024-07-26 10:25:15.724923] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.902 [2024-07-26 10:25:15.725002] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.902 [2024-07-26 10:25:15.735313] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.902 [2024-07-26 10:25:15.735344] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.902 [2024-07-26 10:25:15.735354] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c590f0 name Existed_Raid, state offline 00:15:02.902 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:02.902 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:02.902 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.902 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3362958 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3362958 ']' 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3362958 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:15:03.161 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3362958 00:15:03.161 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.161 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.161 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3362958' 00:15:03.161 killing process with pid 3362958 00:15:03.161 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3362958 00:15:03.161 [2024-07-26 10:25:16.039321] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.161 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3362958 00:15:03.161 [2024-07-26 10:25:16.040168] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.420 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:03.420 00:15:03.420 real 0m9.937s 00:15:03.420 user 0m17.658s 00:15:03.420 sys 0m1.864s 00:15:03.420 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:03.420 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.420 ************************************ 00:15:03.420 END TEST raid_state_function_test_sb 00:15:03.420 ************************************ 00:15:03.420 10:25:16 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:03.420 10:25:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:03.420 10:25:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.420 10:25:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.420 ************************************ 00:15:03.420 START TEST raid_superblock_test 00:15:03.420 ************************************ 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3364872 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3364872 /var/tmp/spdk-raid.sock 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3364872 ']' 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:03.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.420 10:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 [2024-07-26 10:25:16.362886] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:15:03.678 [2024-07-26 10:25:16.362932] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364872 ] 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:03.678 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.678 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:03.679 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:03.679 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:03.679 [2024-07-26 10:25:16.484214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.679 [2024-07-26 10:25:16.528417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.937 [2024-07-26 10:25:16.585835] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.937 [2024-07-26 10:25:16.585863] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 
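(For orientation while reading the trace that follows: once bdev_svc is up on /var/tmp/spdk-raid.sock, raid_superblock_test assembles a RAID1 volume from two passthru bdevs layered on malloc bdevs. A minimal sketch of that RPC sequence, reconstructed only from the commands visible in this trace — sizes, names and UUIDs are the ones the test itself passes:
# build two passthru bdevs on top of malloc bdevs, then assemble a RAID1 with an on-disk superblock (-s)
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
$RPC bdev_raid_get_bdevs all   # expect raid_bdev1: state online, raid_level raid1, 2 base bdevs
The teardown later in the trace reverses this with bdev_raid_delete and bdev_passthru_delete, then re-creates the passthru bdevs so the raid bdev is reassembled from the superblock.)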
00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:04.537 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:04.795 malloc1 00:15:04.795 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.054 [2024-07-26 10:25:17.707772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.054 [2024-07-26 10:25:17.707815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.054 [2024-07-26 10:25:17.707833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e2270 00:15:05.054 [2024-07-26 10:25:17.707844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.054 [2024-07-26 10:25:17.709216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.054 [2024-07-26 10:25:17.709243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:05.054 pt1 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.054 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:05.054 malloc2 00:15:05.313 10:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.313 [2024-07-26 10:25:18.173231] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.313 [2024-07-26 10:25:18.173272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.313 [2024-07-26 10:25:18.173287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x289e2f0 00:15:05.313 [2024-07-26 10:25:18.173299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.313 [2024-07-26 10:25:18.174649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.313 [2024-07-26 10:25:18.174675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.313 pt2 00:15:05.313 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:05.313 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:05.313 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:05.572 [2024-07-26 10:25:18.401840] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.572 [2024-07-26 10:25:18.402997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.572 [2024-07-26 10:25:18.403114] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2867f20 00:15:05.572 [2024-07-26 10:25:18.403125] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:05.572 [2024-07-26 10:25:18.403298] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2731f70 00:15:05.572 [2024-07-26 10:25:18.403412] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2867f20 00:15:05.572 [2024-07-26 10:25:18.403421] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2867f20 00:15:05.572 [2024-07-26 10:25:18.403516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.572 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
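(The verify_raid_bdev_state helper traced here captures the JSON dump that follows and filters it with jq to compare state, raid_level and base-bdev counts against the expected values. A rough standalone equivalent of the check, using the same socket and the jq filter shown in the trace:
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"
)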
00:15:05.830 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.830 "name": "raid_bdev1", 00:15:05.830 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:05.830 "strip_size_kb": 0, 00:15:05.830 "state": "online", 00:15:05.830 "raid_level": "raid1", 00:15:05.830 "superblock": true, 00:15:05.830 "num_base_bdevs": 2, 00:15:05.830 "num_base_bdevs_discovered": 2, 00:15:05.830 "num_base_bdevs_operational": 2, 00:15:05.830 "base_bdevs_list": [ 00:15:05.830 { 00:15:05.830 "name": "pt1", 00:15:05.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.830 "is_configured": true, 00:15:05.830 "data_offset": 2048, 00:15:05.830 "data_size": 63488 00:15:05.830 }, 00:15:05.830 { 00:15:05.830 "name": "pt2", 00:15:05.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.830 "is_configured": true, 00:15:05.830 "data_offset": 2048, 00:15:05.830 "data_size": 63488 00:15:05.830 } 00:15:05.830 ] 00:15:05.830 }' 00:15:05.830 10:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.830 10:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:06.398 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:06.657 [2024-07-26 10:25:19.408707] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.657 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:06.657 "name": "raid_bdev1", 00:15:06.657 "aliases": [ 00:15:06.657 "b8306420-52cd-4701-9529-29972e7bb431" 00:15:06.657 ], 00:15:06.657 "product_name": "Raid Volume", 00:15:06.657 "block_size": 512, 00:15:06.657 "num_blocks": 63488, 00:15:06.657 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:06.657 "assigned_rate_limits": { 00:15:06.657 "rw_ios_per_sec": 0, 00:15:06.657 "rw_mbytes_per_sec": 0, 00:15:06.657 "r_mbytes_per_sec": 0, 00:15:06.657 "w_mbytes_per_sec": 0 00:15:06.657 }, 00:15:06.657 "claimed": false, 00:15:06.657 "zoned": false, 00:15:06.657 "supported_io_types": { 00:15:06.657 "read": true, 00:15:06.657 "write": true, 00:15:06.657 "unmap": false, 00:15:06.657 "flush": false, 00:15:06.657 "reset": true, 00:15:06.657 "nvme_admin": false, 00:15:06.657 "nvme_io": false, 00:15:06.657 "nvme_io_md": false, 00:15:06.657 "write_zeroes": true, 00:15:06.657 "zcopy": false, 00:15:06.657 "get_zone_info": false, 00:15:06.657 "zone_management": false, 00:15:06.657 "zone_append": false, 00:15:06.657 "compare": false, 00:15:06.657 "compare_and_write": false, 00:15:06.657 "abort": false, 00:15:06.657 "seek_hole": false, 00:15:06.657 "seek_data": false, 00:15:06.657 "copy": false, 00:15:06.657 "nvme_iov_md": 
false 00:15:06.657 }, 00:15:06.657 "memory_domains": [ 00:15:06.657 { 00:15:06.657 "dma_device_id": "system", 00:15:06.657 "dma_device_type": 1 00:15:06.657 }, 00:15:06.657 { 00:15:06.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.657 "dma_device_type": 2 00:15:06.657 }, 00:15:06.657 { 00:15:06.657 "dma_device_id": "system", 00:15:06.657 "dma_device_type": 1 00:15:06.657 }, 00:15:06.657 { 00:15:06.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.657 "dma_device_type": 2 00:15:06.657 } 00:15:06.657 ], 00:15:06.658 "driver_specific": { 00:15:06.658 "raid": { 00:15:06.658 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:06.658 "strip_size_kb": 0, 00:15:06.658 "state": "online", 00:15:06.658 "raid_level": "raid1", 00:15:06.658 "superblock": true, 00:15:06.658 "num_base_bdevs": 2, 00:15:06.658 "num_base_bdevs_discovered": 2, 00:15:06.658 "num_base_bdevs_operational": 2, 00:15:06.658 "base_bdevs_list": [ 00:15:06.658 { 00:15:06.658 "name": "pt1", 00:15:06.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.658 "is_configured": true, 00:15:06.658 "data_offset": 2048, 00:15:06.658 "data_size": 63488 00:15:06.658 }, 00:15:06.658 { 00:15:06.658 "name": "pt2", 00:15:06.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.658 "is_configured": true, 00:15:06.658 "data_offset": 2048, 00:15:06.658 "data_size": 63488 00:15:06.658 } 00:15:06.658 ] 00:15:06.658 } 00:15:06.658 } 00:15:06.658 }' 00:15:06.658 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.658 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:06.658 pt2' 00:15:06.658 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.658 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:06.658 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.917 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.917 "name": "pt1", 00:15:06.917 "aliases": [ 00:15:06.917 "00000000-0000-0000-0000-000000000001" 00:15:06.917 ], 00:15:06.917 "product_name": "passthru", 00:15:06.917 "block_size": 512, 00:15:06.917 "num_blocks": 65536, 00:15:06.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.917 "assigned_rate_limits": { 00:15:06.917 "rw_ios_per_sec": 0, 00:15:06.917 "rw_mbytes_per_sec": 0, 00:15:06.917 "r_mbytes_per_sec": 0, 00:15:06.917 "w_mbytes_per_sec": 0 00:15:06.917 }, 00:15:06.917 "claimed": true, 00:15:06.917 "claim_type": "exclusive_write", 00:15:06.917 "zoned": false, 00:15:06.917 "supported_io_types": { 00:15:06.917 "read": true, 00:15:06.917 "write": true, 00:15:06.917 "unmap": true, 00:15:06.917 "flush": true, 00:15:06.917 "reset": true, 00:15:06.917 "nvme_admin": false, 00:15:06.917 "nvme_io": false, 00:15:06.917 "nvme_io_md": false, 00:15:06.917 "write_zeroes": true, 00:15:06.917 "zcopy": true, 00:15:06.917 "get_zone_info": false, 00:15:06.917 "zone_management": false, 00:15:06.917 "zone_append": false, 00:15:06.917 "compare": false, 00:15:06.917 "compare_and_write": false, 00:15:06.917 "abort": true, 00:15:06.917 "seek_hole": false, 00:15:06.917 "seek_data": false, 00:15:06.917 "copy": true, 00:15:06.917 "nvme_iov_md": false 00:15:06.917 }, 00:15:06.917 "memory_domains": [ 
00:15:06.917 { 00:15:06.917 "dma_device_id": "system", 00:15:06.917 "dma_device_type": 1 00:15:06.917 }, 00:15:06.917 { 00:15:06.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.917 "dma_device_type": 2 00:15:06.917 } 00:15:06.917 ], 00:15:06.917 "driver_specific": { 00:15:06.917 "passthru": { 00:15:06.917 "name": "pt1", 00:15:06.917 "base_bdev_name": "malloc1" 00:15:06.917 } 00:15:06.917 } 00:15:06.917 }' 00:15:06.917 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.917 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.917 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.917 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.917 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.176 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.176 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.176 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.176 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.176 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.176 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.176 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.176 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.176 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:07.176 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.435 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.435 "name": "pt2", 00:15:07.435 "aliases": [ 00:15:07.435 "00000000-0000-0000-0000-000000000002" 00:15:07.435 ], 00:15:07.435 "product_name": "passthru", 00:15:07.435 "block_size": 512, 00:15:07.435 "num_blocks": 65536, 00:15:07.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.435 "assigned_rate_limits": { 00:15:07.435 "rw_ios_per_sec": 0, 00:15:07.435 "rw_mbytes_per_sec": 0, 00:15:07.435 "r_mbytes_per_sec": 0, 00:15:07.435 "w_mbytes_per_sec": 0 00:15:07.435 }, 00:15:07.435 "claimed": true, 00:15:07.435 "claim_type": "exclusive_write", 00:15:07.435 "zoned": false, 00:15:07.435 "supported_io_types": { 00:15:07.435 "read": true, 00:15:07.435 "write": true, 00:15:07.435 "unmap": true, 00:15:07.435 "flush": true, 00:15:07.435 "reset": true, 00:15:07.435 "nvme_admin": false, 00:15:07.435 "nvme_io": false, 00:15:07.435 "nvme_io_md": false, 00:15:07.435 "write_zeroes": true, 00:15:07.435 "zcopy": true, 00:15:07.435 "get_zone_info": false, 00:15:07.435 "zone_management": false, 00:15:07.435 "zone_append": false, 00:15:07.435 "compare": false, 00:15:07.435 "compare_and_write": false, 00:15:07.435 "abort": true, 00:15:07.435 "seek_hole": false, 00:15:07.435 "seek_data": false, 00:15:07.435 "copy": true, 00:15:07.435 "nvme_iov_md": false 00:15:07.435 }, 00:15:07.435 "memory_domains": [ 00:15:07.435 { 00:15:07.435 "dma_device_id": "system", 00:15:07.435 "dma_device_type": 1 00:15:07.435 }, 00:15:07.435 { 00:15:07.435 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.435 "dma_device_type": 2 00:15:07.435 } 00:15:07.435 ], 00:15:07.435 "driver_specific": { 00:15:07.435 "passthru": { 00:15:07.435 "name": "pt2", 00:15:07.435 "base_bdev_name": "malloc2" 00:15:07.435 } 00:15:07.435 } 00:15:07.435 }' 00:15:07.435 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.435 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:07.695 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:07.954 [2024-07-26 10:25:20.804385] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.954 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=b8306420-52cd-4701-9529-29972e7bb431 00:15:07.954 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z b8306420-52cd-4701-9529-29972e7bb431 ']' 00:15:07.954 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:08.213 [2024-07-26 10:25:21.040776] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.213 [2024-07-26 10:25:21.040798] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.213 [2024-07-26 10:25:21.040852] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.214 [2024-07-26 10:25:21.040901] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.214 [2024-07-26 10:25:21.040912] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2867f20 name raid_bdev1, state offline 00:15:08.214 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.214 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:08.473 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:08.473 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:08.473 10:25:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.473 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:08.732 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.732 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:08.992 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:08.992 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:09.251 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:09.511 [2024-07-26 10:25:22.163680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.511 [2024-07-26 10:25:22.164942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.511 [2024-07-26 10:25:22.164994] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.511 [2024-07-26 10:25:22.165032] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.511 [2024-07-26 10:25:22.165049] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.511 [2024-07-26 10:25:22.165058] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x289da60 name raid_bdev1, state configuring 00:15:09.511 request: 00:15:09.511 { 00:15:09.511 "name": "raid_bdev1", 00:15:09.511 "raid_level": "raid1", 00:15:09.511 "base_bdevs": [ 00:15:09.511 "malloc1", 00:15:09.511 "malloc2" 00:15:09.511 ], 00:15:09.511 "superblock": false, 00:15:09.511 "method": "bdev_raid_create", 00:15:09.511 "req_id": 1 00:15:09.511 } 00:15:09.511 Got JSON-RPC error response 00:15:09.511 response: 00:15:09.511 { 00:15:09.511 "code": -17, 00:15:09.511 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.511 } 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:09.511 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.770 [2024-07-26 10:25:22.620832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.770 [2024-07-26 10:25:22.620870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.770 [2024-07-26 10:25:22.620886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e33a0 00:15:09.770 [2024-07-26 10:25:22.620898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.770 [2024-07-26 10:25:22.622322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.770 [2024-07-26 10:25:22.622350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.770 [2024-07-26 10:25:22.622417] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:09.770 [2024-07-26 10:25:22.622440] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.770 pt1 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:09.770 10:25:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.770 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.029 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.029 "name": "raid_bdev1", 00:15:10.029 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:10.029 "strip_size_kb": 0, 00:15:10.029 "state": "configuring", 00:15:10.029 "raid_level": "raid1", 00:15:10.029 "superblock": true, 00:15:10.029 "num_base_bdevs": 2, 00:15:10.029 "num_base_bdevs_discovered": 1, 00:15:10.029 "num_base_bdevs_operational": 2, 00:15:10.029 "base_bdevs_list": [ 00:15:10.029 { 00:15:10.029 "name": "pt1", 00:15:10.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.029 "is_configured": true, 00:15:10.029 "data_offset": 2048, 00:15:10.029 "data_size": 63488 00:15:10.029 }, 00:15:10.029 { 00:15:10.029 "name": null, 00:15:10.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.029 "is_configured": false, 00:15:10.029 "data_offset": 2048, 00:15:10.029 "data_size": 63488 00:15:10.029 } 00:15:10.029 ] 00:15:10.029 }' 00:15:10.029 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.029 10:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.598 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:15:10.598 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:10.598 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:10.598 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.856 [2024-07-26 10:25:23.651544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.856 [2024-07-26 10:25:23.651589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.856 [2024-07-26 10:25:23.651606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e3a50 00:15:10.856 [2024-07-26 10:25:23.651618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.856 [2024-07-26 10:25:23.651927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.856 [2024-07-26 10:25:23.651943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.856 [2024-07-26 10:25:23.651998] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.856 [2024-07-26 10:25:23.652015] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.856 [2024-07-26 10:25:23.652107] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x286b5c0 00:15:10.856 [2024-07-26 10:25:23.652117] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:10.856 [2024-07-26 10:25:23.652274] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2869ee0 00:15:10.856 [2024-07-26 10:25:23.652402] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x286b5c0 00:15:10.856 [2024-07-26 10:25:23.652411] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x286b5c0 00:15:10.856 [2024-07-26 10:25:23.652500] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.856 pt2 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.856 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.114 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.114 "name": "raid_bdev1", 00:15:11.114 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:11.114 "strip_size_kb": 0, 00:15:11.114 "state": "online", 00:15:11.114 "raid_level": "raid1", 00:15:11.114 "superblock": true, 00:15:11.114 "num_base_bdevs": 2, 00:15:11.114 "num_base_bdevs_discovered": 2, 00:15:11.114 "num_base_bdevs_operational": 2, 00:15:11.114 "base_bdevs_list": [ 00:15:11.114 { 00:15:11.114 "name": "pt1", 00:15:11.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.114 "is_configured": true, 00:15:11.114 "data_offset": 2048, 00:15:11.114 "data_size": 63488 00:15:11.114 }, 00:15:11.114 { 00:15:11.114 "name": "pt2", 00:15:11.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.114 "is_configured": true, 00:15:11.114 "data_offset": 2048, 00:15:11.114 "data_size": 63488 00:15:11.114 } 00:15:11.114 ] 00:15:11.114 }' 00:15:11.114 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.114 
10:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:11.681 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:11.941 [2024-07-26 10:25:24.658427] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.941 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:11.941 "name": "raid_bdev1", 00:15:11.941 "aliases": [ 00:15:11.941 "b8306420-52cd-4701-9529-29972e7bb431" 00:15:11.941 ], 00:15:11.941 "product_name": "Raid Volume", 00:15:11.941 "block_size": 512, 00:15:11.941 "num_blocks": 63488, 00:15:11.941 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:11.941 "assigned_rate_limits": { 00:15:11.941 "rw_ios_per_sec": 0, 00:15:11.941 "rw_mbytes_per_sec": 0, 00:15:11.941 "r_mbytes_per_sec": 0, 00:15:11.941 "w_mbytes_per_sec": 0 00:15:11.941 }, 00:15:11.941 "claimed": false, 00:15:11.941 "zoned": false, 00:15:11.941 "supported_io_types": { 00:15:11.941 "read": true, 00:15:11.941 "write": true, 00:15:11.941 "unmap": false, 00:15:11.941 "flush": false, 00:15:11.941 "reset": true, 00:15:11.941 "nvme_admin": false, 00:15:11.941 "nvme_io": false, 00:15:11.941 "nvme_io_md": false, 00:15:11.941 "write_zeroes": true, 00:15:11.941 "zcopy": false, 00:15:11.941 "get_zone_info": false, 00:15:11.941 "zone_management": false, 00:15:11.941 "zone_append": false, 00:15:11.941 "compare": false, 00:15:11.941 "compare_and_write": false, 00:15:11.941 "abort": false, 00:15:11.941 "seek_hole": false, 00:15:11.941 "seek_data": false, 00:15:11.941 "copy": false, 00:15:11.941 "nvme_iov_md": false 00:15:11.941 }, 00:15:11.941 "memory_domains": [ 00:15:11.941 { 00:15:11.941 "dma_device_id": "system", 00:15:11.941 "dma_device_type": 1 00:15:11.941 }, 00:15:11.941 { 00:15:11.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.941 "dma_device_type": 2 00:15:11.941 }, 00:15:11.941 { 00:15:11.941 "dma_device_id": "system", 00:15:11.941 "dma_device_type": 1 00:15:11.941 }, 00:15:11.941 { 00:15:11.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.941 "dma_device_type": 2 00:15:11.941 } 00:15:11.941 ], 00:15:11.941 "driver_specific": { 00:15:11.941 "raid": { 00:15:11.941 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:11.941 "strip_size_kb": 0, 00:15:11.941 "state": "online", 00:15:11.941 "raid_level": "raid1", 00:15:11.941 "superblock": true, 00:15:11.941 "num_base_bdevs": 2, 00:15:11.941 "num_base_bdevs_discovered": 2, 00:15:11.941 "num_base_bdevs_operational": 2, 00:15:11.941 "base_bdevs_list": [ 00:15:11.941 { 00:15:11.941 "name": "pt1", 00:15:11.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.941 "is_configured": true, 
00:15:11.941 "data_offset": 2048, 00:15:11.941 "data_size": 63488 00:15:11.941 }, 00:15:11.941 { 00:15:11.941 "name": "pt2", 00:15:11.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.941 "is_configured": true, 00:15:11.941 "data_offset": 2048, 00:15:11.941 "data_size": 63488 00:15:11.941 } 00:15:11.941 ] 00:15:11.941 } 00:15:11.941 } 00:15:11.941 }' 00:15:11.941 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.941 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:11.941 pt2' 00:15:11.941 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.941 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:11.941 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.200 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.200 "name": "pt1", 00:15:12.200 "aliases": [ 00:15:12.200 "00000000-0000-0000-0000-000000000001" 00:15:12.200 ], 00:15:12.200 "product_name": "passthru", 00:15:12.200 "block_size": 512, 00:15:12.200 "num_blocks": 65536, 00:15:12.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.200 "assigned_rate_limits": { 00:15:12.200 "rw_ios_per_sec": 0, 00:15:12.200 "rw_mbytes_per_sec": 0, 00:15:12.200 "r_mbytes_per_sec": 0, 00:15:12.200 "w_mbytes_per_sec": 0 00:15:12.200 }, 00:15:12.201 "claimed": true, 00:15:12.201 "claim_type": "exclusive_write", 00:15:12.201 "zoned": false, 00:15:12.201 "supported_io_types": { 00:15:12.201 "read": true, 00:15:12.201 "write": true, 00:15:12.201 "unmap": true, 00:15:12.201 "flush": true, 00:15:12.201 "reset": true, 00:15:12.201 "nvme_admin": false, 00:15:12.201 "nvme_io": false, 00:15:12.201 "nvme_io_md": false, 00:15:12.201 "write_zeroes": true, 00:15:12.201 "zcopy": true, 00:15:12.201 "get_zone_info": false, 00:15:12.201 "zone_management": false, 00:15:12.201 "zone_append": false, 00:15:12.201 "compare": false, 00:15:12.201 "compare_and_write": false, 00:15:12.201 "abort": true, 00:15:12.201 "seek_hole": false, 00:15:12.201 "seek_data": false, 00:15:12.201 "copy": true, 00:15:12.201 "nvme_iov_md": false 00:15:12.201 }, 00:15:12.201 "memory_domains": [ 00:15:12.201 { 00:15:12.201 "dma_device_id": "system", 00:15:12.201 "dma_device_type": 1 00:15:12.201 }, 00:15:12.201 { 00:15:12.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.201 "dma_device_type": 2 00:15:12.201 } 00:15:12.201 ], 00:15:12.201 "driver_specific": { 00:15:12.201 "passthru": { 00:15:12.201 "name": "pt1", 00:15:12.201 "base_bdev_name": "malloc1" 00:15:12.201 } 00:15:12.201 } 00:15:12.201 }' 00:15:12.201 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.201 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.201 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.201 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.201 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:12.460 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.719 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.720 "name": "pt2", 00:15:12.720 "aliases": [ 00:15:12.720 "00000000-0000-0000-0000-000000000002" 00:15:12.720 ], 00:15:12.720 "product_name": "passthru", 00:15:12.720 "block_size": 512, 00:15:12.720 "num_blocks": 65536, 00:15:12.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.720 "assigned_rate_limits": { 00:15:12.720 "rw_ios_per_sec": 0, 00:15:12.720 "rw_mbytes_per_sec": 0, 00:15:12.720 "r_mbytes_per_sec": 0, 00:15:12.720 "w_mbytes_per_sec": 0 00:15:12.720 }, 00:15:12.720 "claimed": true, 00:15:12.720 "claim_type": "exclusive_write", 00:15:12.720 "zoned": false, 00:15:12.720 "supported_io_types": { 00:15:12.720 "read": true, 00:15:12.720 "write": true, 00:15:12.720 "unmap": true, 00:15:12.720 "flush": true, 00:15:12.720 "reset": true, 00:15:12.720 "nvme_admin": false, 00:15:12.720 "nvme_io": false, 00:15:12.720 "nvme_io_md": false, 00:15:12.720 "write_zeroes": true, 00:15:12.720 "zcopy": true, 00:15:12.720 "get_zone_info": false, 00:15:12.720 "zone_management": false, 00:15:12.720 "zone_append": false, 00:15:12.720 "compare": false, 00:15:12.720 "compare_and_write": false, 00:15:12.720 "abort": true, 00:15:12.720 "seek_hole": false, 00:15:12.720 "seek_data": false, 00:15:12.720 "copy": true, 00:15:12.720 "nvme_iov_md": false 00:15:12.720 }, 00:15:12.720 "memory_domains": [ 00:15:12.720 { 00:15:12.720 "dma_device_id": "system", 00:15:12.720 "dma_device_type": 1 00:15:12.720 }, 00:15:12.720 { 00:15:12.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.720 "dma_device_type": 2 00:15:12.720 } 00:15:12.720 ], 00:15:12.720 "driver_specific": { 00:15:12.720 "passthru": { 00:15:12.720 "name": "pt2", 00:15:12.720 "base_bdev_name": "malloc2" 00:15:12.720 } 00:15:12.720 } 00:15:12.720 }' 00:15:12.720 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.720 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.720 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.720 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:12.980 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:13.334 [2024-07-26 10:25:26.074152] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.334 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' b8306420-52cd-4701-9529-29972e7bb431 '!=' b8306420-52cd-4701-9529-29972e7bb431 ']' 00:15:13.334 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:15:13.334 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:13.334 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:13.334 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:13.593 [2024-07-26 10:25:26.306548] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.593 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.852 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.852 "name": "raid_bdev1", 00:15:13.852 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:13.852 "strip_size_kb": 0, 00:15:13.852 "state": "online", 00:15:13.852 "raid_level": "raid1", 00:15:13.852 "superblock": true, 00:15:13.852 "num_base_bdevs": 2, 00:15:13.852 "num_base_bdevs_discovered": 1, 00:15:13.852 "num_base_bdevs_operational": 1, 00:15:13.852 "base_bdevs_list": [ 00:15:13.852 { 00:15:13.852 "name": null, 00:15:13.852 
"uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.852 "is_configured": false, 00:15:13.852 "data_offset": 2048, 00:15:13.852 "data_size": 63488 00:15:13.852 }, 00:15:13.852 { 00:15:13.852 "name": "pt2", 00:15:13.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.852 "is_configured": true, 00:15:13.852 "data_offset": 2048, 00:15:13.852 "data_size": 63488 00:15:13.852 } 00:15:13.852 ] 00:15:13.852 }' 00:15:13.852 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.852 10:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.421 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:14.421 [2024-07-26 10:25:27.321219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.421 [2024-07-26 10:25:27.321246] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.421 [2024-07-26 10:25:27.321302] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.421 [2024-07-26 10:25:27.321344] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.421 [2024-07-26 10:25:27.321354] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x286b5c0 name raid_bdev1, state offline 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:15:14.680 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:14.939 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:14.939 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:15:14.939 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:15:14.939 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:15:14.939 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:15:14.939 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.199 [2024-07-26 10:25:27.990957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.199 [2024-07-26 10:25:27.991005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.199 [2024-07-26 10:25:27.991022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e3e20 00:15:15.199 [2024-07-26 10:25:27.991033] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.199 [2024-07-26 10:25:27.992530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.199 [2024-07-26 10:25:27.992559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.199 [2024-07-26 10:25:27.992624] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:15.199 [2024-07-26 10:25:27.992649] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.199 [2024-07-26 10:25:27.992728] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2869720 00:15:15.199 [2024-07-26 10:25:27.992738] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.199 [2024-07-26 10:25:27.992896] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2731e20 00:15:15.199 [2024-07-26 10:25:27.993009] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2869720 00:15:15.199 [2024-07-26 10:25:27.993018] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2869720 00:15:15.199 [2024-07-26 10:25:27.993108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.199 pt2 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.199 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.458 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.458 "name": "raid_bdev1", 00:15:15.458 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:15.458 "strip_size_kb": 0, 00:15:15.458 "state": "online", 00:15:15.458 "raid_level": "raid1", 00:15:15.458 "superblock": true, 00:15:15.458 "num_base_bdevs": 2, 00:15:15.458 "num_base_bdevs_discovered": 1, 00:15:15.458 "num_base_bdevs_operational": 1, 00:15:15.458 "base_bdevs_list": [ 00:15:15.458 { 00:15:15.458 "name": null, 00:15:15.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.458 "is_configured": false, 00:15:15.458 "data_offset": 2048, 00:15:15.458 "data_size": 63488 00:15:15.458 }, 00:15:15.458 { 00:15:15.458 "name": "pt2", 00:15:15.458 "uuid": "00000000-0000-0000-0000-000000000002", 
00:15:15.458 "is_configured": true, 00:15:15.458 "data_offset": 2048, 00:15:15.458 "data_size": 63488 00:15:15.458 } 00:15:15.458 ] 00:15:15.458 }' 00:15:15.458 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.458 10:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.027 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:16.286 [2024-07-26 10:25:29.021671] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.286 [2024-07-26 10:25:29.021698] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.286 [2024-07-26 10:25:29.021752] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.286 [2024-07-26 10:25:29.021792] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.286 [2024-07-26 10:25:29.021803] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2869720 name raid_bdev1, state offline 00:15:16.286 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:15:16.286 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.547 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:15:16.547 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:15:16.547 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:15:16.547 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.117 [2024-07-26 10:25:29.739525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.117 [2024-07-26 10:25:29.739571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.117 [2024-07-26 10:25:29.739589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28683e0 00:15:17.117 [2024-07-26 10:25:29.739600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.117 [2024-07-26 10:25:29.741102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.117 [2024-07-26 10:25:29.741129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.117 [2024-07-26 10:25:29.741202] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:17.117 [2024-07-26 10:25:29.741227] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.117 [2024-07-26 10:25:29.741318] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:17.117 [2024-07-26 10:25:29.741330] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.117 [2024-07-26 10:25:29.741342] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2869ff0 name raid_bdev1, state configuring 00:15:17.117 [2024-07-26 10:25:29.741362] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:15:17.117 [2024-07-26 10:25:29.741413] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2868a80 00:15:17.117 [2024-07-26 10:25:29.741422] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:17.117 [2024-07-26 10:25:29.741578] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27477f0 00:15:17.117 [2024-07-26 10:25:29.741689] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2868a80 00:15:17.117 [2024-07-26 10:25:29.741698] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2868a80 00:15:17.117 [2024-07-26 10:25:29.741790] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.117 pt1 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.117 "name": "raid_bdev1", 00:15:17.117 "uuid": "b8306420-52cd-4701-9529-29972e7bb431", 00:15:17.117 "strip_size_kb": 0, 00:15:17.117 "state": "online", 00:15:17.117 "raid_level": "raid1", 00:15:17.117 "superblock": true, 00:15:17.117 "num_base_bdevs": 2, 00:15:17.117 "num_base_bdevs_discovered": 1, 00:15:17.117 "num_base_bdevs_operational": 1, 00:15:17.117 "base_bdevs_list": [ 00:15:17.117 { 00:15:17.117 "name": null, 00:15:17.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.117 "is_configured": false, 00:15:17.117 "data_offset": 2048, 00:15:17.117 "data_size": 63488 00:15:17.117 }, 00:15:17.117 { 00:15:17.117 "name": "pt2", 00:15:17.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.117 "is_configured": true, 00:15:17.117 "data_offset": 2048, 00:15:17.117 "data_size": 63488 00:15:17.117 } 00:15:17.117 ] 00:15:17.117 }' 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.117 10:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.685 10:25:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:17.685 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:17.943 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:15:17.943 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:17.943 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:15:18.203 [2024-07-26 10:25:30.991040] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' b8306420-52cd-4701-9529-29972e7bb431 '!=' b8306420-52cd-4701-9529-29972e7bb431 ']' 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3364872 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3364872 ']' 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3364872 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3364872 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3364872' 00:15:18.203 killing process with pid 3364872 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3364872 00:15:18.203 [2024-07-26 10:25:31.066110] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.203 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3364872 00:15:18.203 [2024-07-26 10:25:31.066173] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.203 [2024-07-26 10:25:31.066226] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.203 [2024-07-26 10:25:31.066236] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2868a80 name raid_bdev1, state offline 00:15:18.203 [2024-07-26 10:25:31.082237] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.463 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:18.463 00:15:18.463 real 0m14.951s 00:15:18.463 user 0m27.122s 00:15:18.463 sys 0m2.764s 00:15:18.463 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.463 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.463 ************************************ 00:15:18.463 END TEST raid_superblock_test 00:15:18.463 ************************************ 00:15:18.463 10:25:31 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:15:18.463 10:25:31 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:18.463 10:25:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.463 10:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.463 ************************************ 00:15:18.463 START TEST raid_read_error_test 00:15:18.463 ************************************ 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.PBKWw38FIA 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3367744 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3367744 /var/tmp/spdk-raid.sock 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3367744 ']' 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:15:18.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.463 10:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:18.723 [2024-07-26 10:25:31.420186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:15:18.723 [2024-07-26 10:25:31.420247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367744 ] 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.723 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:18.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:18.724 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:18.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:18.724 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:18.724 [2024-07-26 10:25:31.552037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.724 [2024-07-26 10:25:31.596527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.983 [2024-07-26 10:25:31.650074] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.983 [2024-07-26 10:25:31.650102] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.551 10:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.551 10:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:19.551 10:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:19.551 10:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:20.120 BaseBdev1_malloc 00:15:20.120 10:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:20.380 true 00:15:20.380 10:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:20.643 [2024-07-26 10:25:33.537691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 
00:15:20.643 [2024-07-26 10:25:33.537734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.643 [2024-07-26 10:25:33.537753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a867c0 00:15:20.643 [2024-07-26 10:25:33.537766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.643 [2024-07-26 10:25:33.539364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.643 [2024-07-26 10:25:33.539390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.643 BaseBdev1 00:15:20.902 10:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:20.902 10:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.902 BaseBdev2_malloc 00:15:20.902 10:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:21.161 true 00:15:21.161 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:21.420 [2024-07-26 10:25:34.244073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:21.420 [2024-07-26 10:25:34.244111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.420 [2024-07-26 10:25:34.244131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a2d960 00:15:21.420 [2024-07-26 10:25:34.244152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.420 [2024-07-26 10:25:34.245443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.420 [2024-07-26 10:25:34.245471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:21.420 BaseBdev2 00:15:21.420 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:21.680 [2024-07-26 10:25:34.484733] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.680 [2024-07-26 10:25:34.485855] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.680 [2024-07-26 10:25:34.486001] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x18d4860 00:15:21.680 [2024-07-26 10:25:34.486013] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.680 [2024-07-26 10:25:34.486194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a32060 00:15:21.680 [2024-07-26 10:25:34.486324] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18d4860 00:15:21.680 [2024-07-26 10:25:34.486334] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18d4860 00:15:21.680 [2024-07-26 10:25:34.486442] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.680 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.939 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.939 "name": "raid_bdev1", 00:15:21.939 "uuid": "7a4f58b9-9115-4793-94cd-875e9ddfd3b2", 00:15:21.939 "strip_size_kb": 0, 00:15:21.939 "state": "online", 00:15:21.939 "raid_level": "raid1", 00:15:21.939 "superblock": true, 00:15:21.939 "num_base_bdevs": 2, 00:15:21.939 "num_base_bdevs_discovered": 2, 00:15:21.939 "num_base_bdevs_operational": 2, 00:15:21.939 "base_bdevs_list": [ 00:15:21.939 { 00:15:21.939 "name": "BaseBdev1", 00:15:21.939 "uuid": "d2907aa6-f96f-5cdb-9779-44900d6569ad", 00:15:21.939 "is_configured": true, 00:15:21.939 "data_offset": 2048, 00:15:21.939 "data_size": 63488 00:15:21.939 }, 00:15:21.939 { 00:15:21.939 "name": "BaseBdev2", 00:15:21.939 "uuid": "c1e8c509-4362-5cf4-81c7-a117947ada56", 00:15:21.939 "is_configured": true, 00:15:21.939 "data_offset": 2048, 00:15:21.939 "data_size": 63488 00:15:21.939 } 00:15:21.939 ] 00:15:21.939 }' 00:15:21.939 10:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.939 10:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.507 10:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:22.507 10:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:22.507 [2024-07-26 10:25:35.379326] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a32190 00:15:23.444 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.703 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.963 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.963 "name": "raid_bdev1", 00:15:23.963 "uuid": "7a4f58b9-9115-4793-94cd-875e9ddfd3b2", 00:15:23.963 "strip_size_kb": 0, 00:15:23.963 "state": "online", 00:15:23.963 "raid_level": "raid1", 00:15:23.963 "superblock": true, 00:15:23.963 "num_base_bdevs": 2, 00:15:23.963 "num_base_bdevs_discovered": 2, 00:15:23.963 "num_base_bdevs_operational": 2, 00:15:23.963 "base_bdevs_list": [ 00:15:23.963 { 00:15:23.963 "name": "BaseBdev1", 00:15:23.963 "uuid": "d2907aa6-f96f-5cdb-9779-44900d6569ad", 00:15:23.963 "is_configured": true, 00:15:23.963 "data_offset": 2048, 00:15:23.963 "data_size": 63488 00:15:23.963 }, 00:15:23.963 { 00:15:23.963 "name": "BaseBdev2", 00:15:23.963 "uuid": "c1e8c509-4362-5cf4-81c7-a117947ada56", 00:15:23.963 "is_configured": true, 00:15:23.963 "data_offset": 2048, 00:15:23.963 "data_size": 63488 00:15:23.963 } 00:15:23.963 ] 00:15:23.963 }' 00:15:23.963 10:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.963 10:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.531 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:24.789 [2024-07-26 10:25:37.555938] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.789 [2024-07-26 10:25:37.555980] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.789 [2024-07-26 10:25:37.558921] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.789 [2024-07-26 10:25:37.558951] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.789 [2024-07-26 10:25:37.559015] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.789 [2024-07-26 10:25:37.559025] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18d4860 name raid_bdev1, state offline 00:15:24.789 0 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3367744 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3367744 ']' 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3367744 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3367744 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3367744' 00:15:24.789 killing process with pid 3367744 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3367744 00:15:24.789 [2024-07-26 10:25:37.634012] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.789 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3367744 00:15:24.789 [2024-07-26 10:25:37.643629] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.PBKWw38FIA 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:25.048 00:15:25.048 real 0m6.487s 00:15:25.048 user 0m10.260s 00:15:25.048 sys 0m1.114s 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.048 10:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.048 ************************************ 00:15:25.048 END TEST raid_read_error_test 00:15:25.048 ************************************ 00:15:25.048 10:25:37 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:15:25.048 10:25:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:25.048 10:25:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.048 10:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.048 ************************************ 00:15:25.048 START TEST raid_write_error_test 00:15:25.048 ************************************ 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.pRDHg3nvxp 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3368902 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3368902 /var/tmp/spdk-raid.sock 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3368902 ']' 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:25.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
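The write-error test being set up here follows the same harness pattern as the read-error test above: bdevperf is started with -z (pause until told to run over RPC) and -f on the private socket /var/tmp/spdk-raid.sock, the script waits for that socket, builds the mirrored array through rpc.py, and only then drives I/O with bdevperf.py perform_tests while an error is injected. A minimal sketch of that control flow, condensed only from the rpc.py invocations recorded in this log (waitforlisten retries, log capture to the bdevperf_log file, and cleanup are omitted):

    # sketch only -- condensed from the commands recorded in this log
    sock=/var/tmp/spdk-raid.sock
    ./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    rpc() { ./scripts/rpc.py -s "$sock" "$@"; }

    for b in BaseBdev1 BaseBdev2; do
        rpc bdev_malloc_create 32 512 -b "${b}_malloc"        # 32 MB backing bdev, 512-byte blocks
        rpc bdev_error_create "${b}_malloc"                    # error-injection bdev named EE_<name>
        rpc bdev_passthru_create -b "EE_${b}_malloc" -p "$b"   # exposed to raid as BaseBdev1/BaseBdev2
    done
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # -s: with superblock

    ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &           # I/O runs in the background
    sleep 1
    rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure             # fail writes on slot 0
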
00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.048 10:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.308 [2024-07-26 10:25:37.990965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:15:25.308 [2024-07-26 10:25:37.991020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368902 ] 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:25.308 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:25.308 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:25.308 [2024-07-26 10:25:38.124205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.308 [2024-07-26 10:25:38.169026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.567 [2024-07-26 10:25:38.235626] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.567 [2024-07-26 10:25:38.235658] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.136 10:25:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.136 10:25:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:26.136 10:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:26.136 10:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:26.395 BaseBdev1_malloc 00:15:26.395 10:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:26.657 true 00:15:26.657 10:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:26.914 [2024-07-26 10:25:39.567562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:26.914 [2024-07-26 10:25:39.567602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.914 [2024-07-26 10:25:39.567620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ae7c0 00:15:26.914 [2024-07-26 10:25:39.567632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:26.914 [2024-07-26 10:25:39.569208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.914 [2024-07-26 10:25:39.569236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.914 BaseBdev1 00:15:26.914 10:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:26.914 10:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:26.914 BaseBdev2_malloc 00:15:26.914 10:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:27.173 true 00:15:27.173 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:27.432 [2024-07-26 10:25:40.257750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:27.432 [2024-07-26 10:25:40.257791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.432 [2024-07-26 10:25:40.257812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2455960 00:15:27.432 [2024-07-26 10:25:40.257828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.432 [2024-07-26 10:25:40.259258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.432 [2024-07-26 10:25:40.259285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:27.432 BaseBdev2 00:15:27.432 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:27.691 [2024-07-26 10:25:40.470323] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.691 [2024-07-26 10:25:40.471445] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.691 [2024-07-26 10:25:40.471599] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x22fc860 00:15:27.691 [2024-07-26 10:25:40.471611] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:27.691 [2024-07-26 10:25:40.471788] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x245a060 00:15:27.691 [2024-07-26 10:25:40.471915] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22fc860 00:15:27.691 [2024-07-26 10:25:40.471924] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22fc860 00:15:27.691 [2024-07-26 10:25:40.472033] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.691 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.950 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.950 "name": "raid_bdev1", 00:15:27.950 "uuid": "c8f72179-8861-445e-982c-adc0cd741719", 00:15:27.950 "strip_size_kb": 0, 00:15:27.950 "state": "online", 00:15:27.950 "raid_level": "raid1", 00:15:27.950 "superblock": true, 00:15:27.950 "num_base_bdevs": 2, 00:15:27.950 "num_base_bdevs_discovered": 2, 00:15:27.950 "num_base_bdevs_operational": 2, 00:15:27.950 "base_bdevs_list": [ 00:15:27.950 { 00:15:27.950 "name": "BaseBdev1", 00:15:27.950 "uuid": "74fb7fb5-cca1-5960-bc04-843e5edef7f0", 00:15:27.950 "is_configured": true, 00:15:27.950 "data_offset": 2048, 00:15:27.950 "data_size": 63488 00:15:27.950 }, 00:15:27.950 { 00:15:27.950 "name": "BaseBdev2", 00:15:27.950 "uuid": "556c5dbc-248b-5085-86cc-fa40857078ac", 00:15:27.950 "is_configured": true, 00:15:27.950 "data_offset": 2048, 00:15:27.950 "data_size": 63488 00:15:27.950 } 00:15:27.950 ] 00:15:27.950 }' 00:15:27.950 10:25:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.950 10:25:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.517 10:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:28.517 10:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:28.517 [2024-07-26 10:25:41.392996] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x245a190 00:15:29.455 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:29.714 [2024-07-26 10:25:42.511603] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:29.714 [2024-07-26 10:25:42.511652] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.715 [2024-07-26 10:25:42.511824] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x245a190 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 
00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.715 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.019 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.019 "name": "raid_bdev1", 00:15:30.019 "uuid": "c8f72179-8861-445e-982c-adc0cd741719", 00:15:30.019 "strip_size_kb": 0, 00:15:30.019 "state": "online", 00:15:30.019 "raid_level": "raid1", 00:15:30.019 "superblock": true, 00:15:30.019 "num_base_bdevs": 2, 00:15:30.019 "num_base_bdevs_discovered": 1, 00:15:30.019 "num_base_bdevs_operational": 1, 00:15:30.019 "base_bdevs_list": [ 00:15:30.019 { 00:15:30.020 "name": null, 00:15:30.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.020 "is_configured": false, 00:15:30.020 "data_offset": 2048, 00:15:30.020 "data_size": 63488 00:15:30.020 }, 00:15:30.020 { 00:15:30.020 "name": "BaseBdev2", 00:15:30.020 "uuid": "556c5dbc-248b-5085-86cc-fa40857078ac", 00:15:30.020 "is_configured": true, 00:15:30.020 "data_offset": 2048, 00:15:30.020 "data_size": 63488 00:15:30.020 } 00:15:30.020 ] 00:15:30.020 }' 00:15:30.020 10:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.020 10:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.607 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:30.607 [2024-07-26 10:25:43.494044] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.607 [2024-07-26 10:25:43.494075] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.607 [2024-07-26 10:25:43.496941] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.607 [2024-07-26 10:25:43.496966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.607 [2024-07-26 10:25:43.497012] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
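After the injected write failure, raid1 cannot keep the failed leg: the NOTICE above shows slot 0 ('BaseBdev1') being failed and removed, so the expected number of discovered base bdevs drops to 1 for the write case, whereas the earlier read-error test kept both legs because raid1 can satisfy reads from the surviving mirror. The verify_raid_bdev_state call only records its locals (expected_state=online, raid_level=raid1, one operational base bdev) and its jq filter in this log, so the check below is an approximate reconstruction built from those recorded filters, not the function's actual body:

    # approximate reconstruction of the state check, based on the jq filters recorded in this log
    sock=/var/tmp/spdk-raid.sock
    info=$(./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state                      <<<"$info")" = online ]
    [ "$(jq -r .raid_level                 <<<"$info")" = raid1 ]
    [ "$(jq -r .num_base_bdevs_discovered  <<<"$info")" -eq 1 ]
    [ "$(jq -r '.base_bdevs_list[0].is_configured' <<<"$info")" = false ]   # failed slot left unconfigured
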
00:15:30.607 [2024-07-26 10:25:43.497022] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22fc860 name raid_bdev1, state offline 00:15:30.607 0 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3368902 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3368902 ']' 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3368902 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3368902 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3368902' 00:15:30.865 killing process with pid 3368902 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3368902 00:15:30.865 [2024-07-26 10:25:43.574656] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3368902 00:15:30.865 [2024-07-26 10:25:43.583437] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.pRDHg3nvxp 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:30.865 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:31.124 00:15:31.124 real 0m5.856s 00:15:31.124 user 0m9.043s 00:15:31.124 sys 0m1.088s 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.124 10:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.124 ************************************ 00:15:31.124 END TEST raid_write_error_test 00:15:31.124 ************************************ 00:15:31.124 10:25:43 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:15:31.124 10:25:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:31.124 10:25:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:31.124 10:25:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:31.124 10:25:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.124 10:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.124 ************************************ 00:15:31.124 START TEST 
raid_state_function_test 00:15:31.124 ************************************ 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:31.124 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3370054 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3370054' 00:15:31.125 Process raid pid: 3370054 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:31.125 10:25:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3370054 /var/tmp/spdk-raid.sock 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3370054 ']' 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:31.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.125 10:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.125 [2024-07-26 10:25:43.924999] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:15:31.125 [2024-07-26 10:25:43.925053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:31.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:31.125 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:31.384 [2024-07-26 10:25:44.057094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.384 [2024-07-26 10:25:44.101417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.384 [2024-07-26 10:25:44.164845] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.384 [2024-07-26 10:25:44.164880] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.953 10:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.953 10:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:31.953 10:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:32.212 [2024-07-26 10:25:45.013069] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.212 [2024-07-26 10:25:45.013106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.212 [2024-07-26 
10:25:45.013116] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.212 [2024-07-26 10:25:45.013126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.212 [2024-07-26 10:25:45.013134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.212 [2024-07-26 10:25:45.013153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.212 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.471 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.471 "name": "Existed_Raid", 00:15:32.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.471 "strip_size_kb": 64, 00:15:32.471 "state": "configuring", 00:15:32.471 "raid_level": "raid0", 00:15:32.471 "superblock": false, 00:15:32.471 "num_base_bdevs": 3, 00:15:32.471 "num_base_bdevs_discovered": 0, 00:15:32.471 "num_base_bdevs_operational": 3, 00:15:32.471 "base_bdevs_list": [ 00:15:32.471 { 00:15:32.471 "name": "BaseBdev1", 00:15:32.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.471 "is_configured": false, 00:15:32.471 "data_offset": 0, 00:15:32.471 "data_size": 0 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "name": "BaseBdev2", 00:15:32.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.471 "is_configured": false, 00:15:32.471 "data_offset": 0, 00:15:32.471 "data_size": 0 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "name": "BaseBdev3", 00:15:32.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.471 "is_configured": false, 00:15:32.471 "data_offset": 0, 00:15:32.471 "data_size": 0 00:15:32.471 } 00:15:32.471 ] 00:15:32.471 }' 00:15:32.471 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.471 10:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.039 10:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:33.298 [2024-07-26 10:25:46.031711] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.298 [2024-07-26 10:25:46.031740] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c31b70 name Existed_Raid, state configuring 00:15:33.298 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:33.557 [2024-07-26 10:25:46.264345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.557 [2024-07-26 10:25:46.264368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.557 [2024-07-26 10:25:46.264377] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.557 [2024-07-26 10:25:46.264387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.557 [2024-07-26 10:25:46.264395] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.557 [2024-07-26 10:25:46.264405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.557 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.816 [2024-07-26 10:25:46.498251] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.816 BaseBdev1 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.816 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.075 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.075 [ 00:15:34.075 { 00:15:34.075 "name": "BaseBdev1", 00:15:34.075 "aliases": [ 00:15:34.075 "ee0426d4-ace8-4a43-a013-83c9d373f074" 00:15:34.075 ], 00:15:34.075 "product_name": "Malloc disk", 00:15:34.075 "block_size": 512, 00:15:34.075 "num_blocks": 65536, 00:15:34.075 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:34.075 "assigned_rate_limits": { 00:15:34.075 "rw_ios_per_sec": 0, 00:15:34.075 "rw_mbytes_per_sec": 0, 00:15:34.075 "r_mbytes_per_sec": 0, 00:15:34.075 "w_mbytes_per_sec": 0 00:15:34.075 }, 00:15:34.075 "claimed": true, 00:15:34.075 "claim_type": "exclusive_write", 00:15:34.075 "zoned": false, 00:15:34.075 "supported_io_types": { 00:15:34.075 "read": true, 00:15:34.075 "write": true, 
00:15:34.075 "unmap": true, 00:15:34.075 "flush": true, 00:15:34.075 "reset": true, 00:15:34.075 "nvme_admin": false, 00:15:34.075 "nvme_io": false, 00:15:34.075 "nvme_io_md": false, 00:15:34.075 "write_zeroes": true, 00:15:34.075 "zcopy": true, 00:15:34.075 "get_zone_info": false, 00:15:34.075 "zone_management": false, 00:15:34.075 "zone_append": false, 00:15:34.075 "compare": false, 00:15:34.075 "compare_and_write": false, 00:15:34.075 "abort": true, 00:15:34.075 "seek_hole": false, 00:15:34.075 "seek_data": false, 00:15:34.075 "copy": true, 00:15:34.075 "nvme_iov_md": false 00:15:34.075 }, 00:15:34.075 "memory_domains": [ 00:15:34.075 { 00:15:34.075 "dma_device_id": "system", 00:15:34.075 "dma_device_type": 1 00:15:34.075 }, 00:15:34.075 { 00:15:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.075 "dma_device_type": 2 00:15:34.075 } 00:15:34.075 ], 00:15:34.075 "driver_specific": {} 00:15:34.076 } 00:15:34.076 ] 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.076 10:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.335 10:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.335 "name": "Existed_Raid", 00:15:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.335 "strip_size_kb": 64, 00:15:34.335 "state": "configuring", 00:15:34.335 "raid_level": "raid0", 00:15:34.335 "superblock": false, 00:15:34.335 "num_base_bdevs": 3, 00:15:34.335 "num_base_bdevs_discovered": 1, 00:15:34.335 "num_base_bdevs_operational": 3, 00:15:34.335 "base_bdevs_list": [ 00:15:34.335 { 00:15:34.335 "name": "BaseBdev1", 00:15:34.335 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:34.335 "is_configured": true, 00:15:34.335 "data_offset": 0, 00:15:34.335 "data_size": 65536 00:15:34.335 }, 00:15:34.335 { 00:15:34.335 "name": "BaseBdev2", 00:15:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.335 "is_configured": false, 00:15:34.335 "data_offset": 0, 00:15:34.335 "data_size": 0 00:15:34.335 }, 00:15:34.335 { 00:15:34.335 "name": "BaseBdev3", 00:15:34.335 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:34.335 "is_configured": false, 00:15:34.335 "data_offset": 0, 00:15:34.335 "data_size": 0 00:15:34.335 } 00:15:34.335 ] 00:15:34.335 }' 00:15:34.335 10:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.335 10:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.904 10:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:35.163 [2024-07-26 10:25:47.982173] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.163 [2024-07-26 10:25:47.982210] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c314a0 name Existed_Raid, state configuring 00:15:35.163 10:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:35.422 [2024-07-26 10:25:48.210802] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.422 [2024-07-26 10:25:48.212148] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.422 [2024-07-26 10:25:48.212179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.422 [2024-07-26 10:25:48.212189] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.422 [2024-07-26 10:25:48.212200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.422 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.682 10:25:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.682 "name": "Existed_Raid", 00:15:35.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.682 "strip_size_kb": 64, 00:15:35.682 "state": "configuring", 00:15:35.682 "raid_level": "raid0", 00:15:35.682 "superblock": false, 00:15:35.682 "num_base_bdevs": 3, 00:15:35.682 "num_base_bdevs_discovered": 1, 00:15:35.682 "num_base_bdevs_operational": 3, 00:15:35.682 "base_bdevs_list": [ 00:15:35.682 { 00:15:35.682 "name": "BaseBdev1", 00:15:35.682 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:35.682 "is_configured": true, 00:15:35.682 "data_offset": 0, 00:15:35.682 "data_size": 65536 00:15:35.682 }, 00:15:35.682 { 00:15:35.682 "name": "BaseBdev2", 00:15:35.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.682 "is_configured": false, 00:15:35.682 "data_offset": 0, 00:15:35.682 "data_size": 0 00:15:35.682 }, 00:15:35.682 { 00:15:35.682 "name": "BaseBdev3", 00:15:35.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.682 "is_configured": false, 00:15:35.682 "data_offset": 0, 00:15:35.682 "data_size": 0 00:15:35.682 } 00:15:35.682 ] 00:15:35.682 }' 00:15:35.682 10:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.682 10:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.251 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.510 [2024-07-26 10:25:49.236749] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.510 BaseBdev2 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:36.510 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.769 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.029 [ 00:15:37.029 { 00:15:37.029 "name": "BaseBdev2", 00:15:37.029 "aliases": [ 00:15:37.029 "bd7f5190-508d-4dba-8905-6f483983dfbd" 00:15:37.029 ], 00:15:37.029 "product_name": "Malloc disk", 00:15:37.029 "block_size": 512, 00:15:37.029 "num_blocks": 65536, 00:15:37.029 "uuid": "bd7f5190-508d-4dba-8905-6f483983dfbd", 00:15:37.029 "assigned_rate_limits": { 00:15:37.029 "rw_ios_per_sec": 0, 00:15:37.029 "rw_mbytes_per_sec": 0, 00:15:37.029 "r_mbytes_per_sec": 0, 00:15:37.029 "w_mbytes_per_sec": 0 00:15:37.029 }, 00:15:37.029 "claimed": true, 00:15:37.029 "claim_type": "exclusive_write", 00:15:37.029 "zoned": false, 00:15:37.029 "supported_io_types": { 00:15:37.029 "read": true, 00:15:37.029 "write": true, 00:15:37.029 "unmap": 
true, 00:15:37.029 "flush": true, 00:15:37.029 "reset": true, 00:15:37.029 "nvme_admin": false, 00:15:37.029 "nvme_io": false, 00:15:37.029 "nvme_io_md": false, 00:15:37.029 "write_zeroes": true, 00:15:37.029 "zcopy": true, 00:15:37.029 "get_zone_info": false, 00:15:37.029 "zone_management": false, 00:15:37.029 "zone_append": false, 00:15:37.029 "compare": false, 00:15:37.029 "compare_and_write": false, 00:15:37.029 "abort": true, 00:15:37.029 "seek_hole": false, 00:15:37.029 "seek_data": false, 00:15:37.029 "copy": true, 00:15:37.029 "nvme_iov_md": false 00:15:37.029 }, 00:15:37.029 "memory_domains": [ 00:15:37.029 { 00:15:37.029 "dma_device_id": "system", 00:15:37.029 "dma_device_type": 1 00:15:37.029 }, 00:15:37.029 { 00:15:37.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.029 "dma_device_type": 2 00:15:37.029 } 00:15:37.029 ], 00:15:37.029 "driver_specific": {} 00:15:37.029 } 00:15:37.029 ] 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.029 "name": "Existed_Raid", 00:15:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.029 "strip_size_kb": 64, 00:15:37.029 "state": "configuring", 00:15:37.029 "raid_level": "raid0", 00:15:37.029 "superblock": false, 00:15:37.029 "num_base_bdevs": 3, 00:15:37.029 "num_base_bdevs_discovered": 2, 00:15:37.029 "num_base_bdevs_operational": 3, 00:15:37.029 "base_bdevs_list": [ 00:15:37.029 { 00:15:37.029 "name": "BaseBdev1", 00:15:37.029 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:37.029 "is_configured": true, 00:15:37.029 "data_offset": 0, 00:15:37.029 "data_size": 65536 00:15:37.029 }, 00:15:37.029 { 00:15:37.029 "name": "BaseBdev2", 00:15:37.029 "uuid": 
"bd7f5190-508d-4dba-8905-6f483983dfbd", 00:15:37.029 "is_configured": true, 00:15:37.029 "data_offset": 0, 00:15:37.029 "data_size": 65536 00:15:37.029 }, 00:15:37.029 { 00:15:37.029 "name": "BaseBdev3", 00:15:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.029 "is_configured": false, 00:15:37.029 "data_offset": 0, 00:15:37.029 "data_size": 0 00:15:37.029 } 00:15:37.029 ] 00:15:37.029 }' 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.029 10:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.967 [2024-07-26 10:25:50.727988] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.967 [2024-07-26 10:25:50.728021] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1de42d0 00:15:37.967 [2024-07-26 10:25:50.728028] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:37.967 [2024-07-26 10:25:50.728270] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c38560 00:15:37.967 [2024-07-26 10:25:50.728381] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1de42d0 00:15:37.967 [2024-07-26 10:25:50.728390] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1de42d0 00:15:37.967 [2024-07-26 10:25:50.728538] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.967 BaseBdev3 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:37.967 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.226 10:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.486 [ 00:15:38.486 { 00:15:38.486 "name": "BaseBdev3", 00:15:38.486 "aliases": [ 00:15:38.486 "89deb6c4-de5b-457a-984c-11fc563f1bab" 00:15:38.486 ], 00:15:38.486 "product_name": "Malloc disk", 00:15:38.486 "block_size": 512, 00:15:38.486 "num_blocks": 65536, 00:15:38.486 "uuid": "89deb6c4-de5b-457a-984c-11fc563f1bab", 00:15:38.486 "assigned_rate_limits": { 00:15:38.486 "rw_ios_per_sec": 0, 00:15:38.486 "rw_mbytes_per_sec": 0, 00:15:38.486 "r_mbytes_per_sec": 0, 00:15:38.486 "w_mbytes_per_sec": 0 00:15:38.486 }, 00:15:38.486 "claimed": true, 00:15:38.486 "claim_type": "exclusive_write", 00:15:38.486 "zoned": false, 00:15:38.486 "supported_io_types": { 00:15:38.486 "read": true, 00:15:38.486 "write": true, 
00:15:38.486 "unmap": true, 00:15:38.486 "flush": true, 00:15:38.486 "reset": true, 00:15:38.486 "nvme_admin": false, 00:15:38.486 "nvme_io": false, 00:15:38.486 "nvme_io_md": false, 00:15:38.486 "write_zeroes": true, 00:15:38.486 "zcopy": true, 00:15:38.486 "get_zone_info": false, 00:15:38.486 "zone_management": false, 00:15:38.486 "zone_append": false, 00:15:38.486 "compare": false, 00:15:38.486 "compare_and_write": false, 00:15:38.486 "abort": true, 00:15:38.486 "seek_hole": false, 00:15:38.486 "seek_data": false, 00:15:38.486 "copy": true, 00:15:38.486 "nvme_iov_md": false 00:15:38.486 }, 00:15:38.486 "memory_domains": [ 00:15:38.486 { 00:15:38.486 "dma_device_id": "system", 00:15:38.486 "dma_device_type": 1 00:15:38.486 }, 00:15:38.486 { 00:15:38.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.486 "dma_device_type": 2 00:15:38.486 } 00:15:38.486 ], 00:15:38.486 "driver_specific": {} 00:15:38.486 } 00:15:38.486 ] 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.486 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.746 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.746 "name": "Existed_Raid", 00:15:38.746 "uuid": "50c8c63f-88ae-4ea8-a723-99b550415840", 00:15:38.746 "strip_size_kb": 64, 00:15:38.746 "state": "online", 00:15:38.746 "raid_level": "raid0", 00:15:38.746 "superblock": false, 00:15:38.746 "num_base_bdevs": 3, 00:15:38.746 "num_base_bdevs_discovered": 3, 00:15:38.746 "num_base_bdevs_operational": 3, 00:15:38.746 "base_bdevs_list": [ 00:15:38.746 { 00:15:38.746 "name": "BaseBdev1", 00:15:38.746 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:38.746 "is_configured": true, 00:15:38.746 "data_offset": 0, 00:15:38.746 "data_size": 65536 00:15:38.746 }, 00:15:38.746 { 00:15:38.746 "name": "BaseBdev2", 00:15:38.746 "uuid": 
"bd7f5190-508d-4dba-8905-6f483983dfbd", 00:15:38.746 "is_configured": true, 00:15:38.746 "data_offset": 0, 00:15:38.746 "data_size": 65536 00:15:38.746 }, 00:15:38.746 { 00:15:38.746 "name": "BaseBdev3", 00:15:38.746 "uuid": "89deb6c4-de5b-457a-984c-11fc563f1bab", 00:15:38.746 "is_configured": true, 00:15:38.746 "data_offset": 0, 00:15:38.746 "data_size": 65536 00:15:38.746 } 00:15:38.746 ] 00:15:38.746 }' 00:15:38.746 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.746 10:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:39.314 10:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:39.314 [2024-07-26 10:25:52.204203] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.573 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:39.573 "name": "Existed_Raid", 00:15:39.573 "aliases": [ 00:15:39.573 "50c8c63f-88ae-4ea8-a723-99b550415840" 00:15:39.573 ], 00:15:39.573 "product_name": "Raid Volume", 00:15:39.573 "block_size": 512, 00:15:39.573 "num_blocks": 196608, 00:15:39.573 "uuid": "50c8c63f-88ae-4ea8-a723-99b550415840", 00:15:39.573 "assigned_rate_limits": { 00:15:39.573 "rw_ios_per_sec": 0, 00:15:39.573 "rw_mbytes_per_sec": 0, 00:15:39.573 "r_mbytes_per_sec": 0, 00:15:39.573 "w_mbytes_per_sec": 0 00:15:39.573 }, 00:15:39.574 "claimed": false, 00:15:39.574 "zoned": false, 00:15:39.574 "supported_io_types": { 00:15:39.574 "read": true, 00:15:39.574 "write": true, 00:15:39.574 "unmap": true, 00:15:39.574 "flush": true, 00:15:39.574 "reset": true, 00:15:39.574 "nvme_admin": false, 00:15:39.574 "nvme_io": false, 00:15:39.574 "nvme_io_md": false, 00:15:39.574 "write_zeroes": true, 00:15:39.574 "zcopy": false, 00:15:39.574 "get_zone_info": false, 00:15:39.574 "zone_management": false, 00:15:39.574 "zone_append": false, 00:15:39.574 "compare": false, 00:15:39.574 "compare_and_write": false, 00:15:39.574 "abort": false, 00:15:39.574 "seek_hole": false, 00:15:39.574 "seek_data": false, 00:15:39.574 "copy": false, 00:15:39.574 "nvme_iov_md": false 00:15:39.574 }, 00:15:39.574 "memory_domains": [ 00:15:39.574 { 00:15:39.574 "dma_device_id": "system", 00:15:39.574 "dma_device_type": 1 00:15:39.574 }, 00:15:39.574 { 00:15:39.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.574 "dma_device_type": 2 00:15:39.574 }, 00:15:39.574 { 00:15:39.574 "dma_device_id": "system", 00:15:39.574 "dma_device_type": 1 00:15:39.574 }, 00:15:39.574 { 00:15:39.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.574 "dma_device_type": 2 00:15:39.574 }, 
00:15:39.574 { 00:15:39.574 "dma_device_id": "system", 00:15:39.574 "dma_device_type": 1 00:15:39.574 }, 00:15:39.574 { 00:15:39.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.574 "dma_device_type": 2 00:15:39.574 } 00:15:39.574 ], 00:15:39.574 "driver_specific": { 00:15:39.574 "raid": { 00:15:39.574 "uuid": "50c8c63f-88ae-4ea8-a723-99b550415840", 00:15:39.574 "strip_size_kb": 64, 00:15:39.574 "state": "online", 00:15:39.574 "raid_level": "raid0", 00:15:39.574 "superblock": false, 00:15:39.574 "num_base_bdevs": 3, 00:15:39.574 "num_base_bdevs_discovered": 3, 00:15:39.574 "num_base_bdevs_operational": 3, 00:15:39.574 "base_bdevs_list": [ 00:15:39.574 { 00:15:39.574 "name": "BaseBdev1", 00:15:39.574 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:39.574 "is_configured": true, 00:15:39.574 "data_offset": 0, 00:15:39.574 "data_size": 65536 00:15:39.574 }, 00:15:39.574 { 00:15:39.574 "name": "BaseBdev2", 00:15:39.574 "uuid": "bd7f5190-508d-4dba-8905-6f483983dfbd", 00:15:39.574 "is_configured": true, 00:15:39.574 "data_offset": 0, 00:15:39.574 "data_size": 65536 00:15:39.574 }, 00:15:39.574 { 00:15:39.574 "name": "BaseBdev3", 00:15:39.574 "uuid": "89deb6c4-de5b-457a-984c-11fc563f1bab", 00:15:39.574 "is_configured": true, 00:15:39.574 "data_offset": 0, 00:15:39.574 "data_size": 65536 00:15:39.574 } 00:15:39.574 ] 00:15:39.574 } 00:15:39.574 } 00:15:39.574 }' 00:15:39.574 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.574 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:39.574 BaseBdev2 00:15:39.574 BaseBdev3' 00:15:39.574 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.574 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:39.574 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:39.833 "name": "BaseBdev1", 00:15:39.833 "aliases": [ 00:15:39.833 "ee0426d4-ace8-4a43-a013-83c9d373f074" 00:15:39.833 ], 00:15:39.833 "product_name": "Malloc disk", 00:15:39.833 "block_size": 512, 00:15:39.833 "num_blocks": 65536, 00:15:39.833 "uuid": "ee0426d4-ace8-4a43-a013-83c9d373f074", 00:15:39.833 "assigned_rate_limits": { 00:15:39.833 "rw_ios_per_sec": 0, 00:15:39.833 "rw_mbytes_per_sec": 0, 00:15:39.833 "r_mbytes_per_sec": 0, 00:15:39.833 "w_mbytes_per_sec": 0 00:15:39.833 }, 00:15:39.833 "claimed": true, 00:15:39.833 "claim_type": "exclusive_write", 00:15:39.833 "zoned": false, 00:15:39.833 "supported_io_types": { 00:15:39.833 "read": true, 00:15:39.833 "write": true, 00:15:39.833 "unmap": true, 00:15:39.833 "flush": true, 00:15:39.833 "reset": true, 00:15:39.833 "nvme_admin": false, 00:15:39.833 "nvme_io": false, 00:15:39.833 "nvme_io_md": false, 00:15:39.833 "write_zeroes": true, 00:15:39.833 "zcopy": true, 00:15:39.833 "get_zone_info": false, 00:15:39.833 "zone_management": false, 00:15:39.833 "zone_append": false, 00:15:39.833 "compare": false, 00:15:39.833 "compare_and_write": false, 00:15:39.833 "abort": true, 00:15:39.833 "seek_hole": false, 00:15:39.833 "seek_data": false, 00:15:39.833 "copy": true, 00:15:39.833 "nvme_iov_md": false 00:15:39.833 }, 00:15:39.833 
"memory_domains": [ 00:15:39.833 { 00:15:39.833 "dma_device_id": "system", 00:15:39.833 "dma_device_type": 1 00:15:39.833 }, 00:15:39.833 { 00:15:39.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.833 "dma_device_type": 2 00:15:39.833 } 00:15:39.833 ], 00:15:39.833 "driver_specific": {} 00:15:39.833 }' 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.833 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:40.093 10:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.352 "name": "BaseBdev2", 00:15:40.352 "aliases": [ 00:15:40.352 "bd7f5190-508d-4dba-8905-6f483983dfbd" 00:15:40.352 ], 00:15:40.352 "product_name": "Malloc disk", 00:15:40.352 "block_size": 512, 00:15:40.352 "num_blocks": 65536, 00:15:40.352 "uuid": "bd7f5190-508d-4dba-8905-6f483983dfbd", 00:15:40.352 "assigned_rate_limits": { 00:15:40.352 "rw_ios_per_sec": 0, 00:15:40.352 "rw_mbytes_per_sec": 0, 00:15:40.352 "r_mbytes_per_sec": 0, 00:15:40.352 "w_mbytes_per_sec": 0 00:15:40.352 }, 00:15:40.352 "claimed": true, 00:15:40.352 "claim_type": "exclusive_write", 00:15:40.352 "zoned": false, 00:15:40.352 "supported_io_types": { 00:15:40.352 "read": true, 00:15:40.352 "write": true, 00:15:40.352 "unmap": true, 00:15:40.352 "flush": true, 00:15:40.352 "reset": true, 00:15:40.352 "nvme_admin": false, 00:15:40.352 "nvme_io": false, 00:15:40.352 "nvme_io_md": false, 00:15:40.352 "write_zeroes": true, 00:15:40.352 "zcopy": true, 00:15:40.352 "get_zone_info": false, 00:15:40.352 "zone_management": false, 00:15:40.352 "zone_append": false, 00:15:40.352 "compare": false, 00:15:40.352 "compare_and_write": false, 00:15:40.352 "abort": true, 00:15:40.352 "seek_hole": false, 00:15:40.352 "seek_data": false, 00:15:40.352 "copy": true, 00:15:40.352 "nvme_iov_md": false 00:15:40.352 }, 00:15:40.352 "memory_domains": [ 00:15:40.352 { 00:15:40.352 "dma_device_id": "system", 00:15:40.352 "dma_device_type": 1 00:15:40.352 }, 00:15:40.352 { 00:15:40.352 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:40.352 "dma_device_type": 2 00:15:40.352 } 00:15:40.352 ], 00:15:40.352 "driver_specific": {} 00:15:40.352 }' 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.352 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.619 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:40.620 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.881 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.881 "name": "BaseBdev3", 00:15:40.881 "aliases": [ 00:15:40.881 "89deb6c4-de5b-457a-984c-11fc563f1bab" 00:15:40.881 ], 00:15:40.881 "product_name": "Malloc disk", 00:15:40.881 "block_size": 512, 00:15:40.881 "num_blocks": 65536, 00:15:40.881 "uuid": "89deb6c4-de5b-457a-984c-11fc563f1bab", 00:15:40.881 "assigned_rate_limits": { 00:15:40.881 "rw_ios_per_sec": 0, 00:15:40.881 "rw_mbytes_per_sec": 0, 00:15:40.881 "r_mbytes_per_sec": 0, 00:15:40.881 "w_mbytes_per_sec": 0 00:15:40.881 }, 00:15:40.881 "claimed": true, 00:15:40.881 "claim_type": "exclusive_write", 00:15:40.881 "zoned": false, 00:15:40.881 "supported_io_types": { 00:15:40.881 "read": true, 00:15:40.881 "write": true, 00:15:40.881 "unmap": true, 00:15:40.881 "flush": true, 00:15:40.881 "reset": true, 00:15:40.881 "nvme_admin": false, 00:15:40.881 "nvme_io": false, 00:15:40.881 "nvme_io_md": false, 00:15:40.881 "write_zeroes": true, 00:15:40.881 "zcopy": true, 00:15:40.881 "get_zone_info": false, 00:15:40.881 "zone_management": false, 00:15:40.881 "zone_append": false, 00:15:40.881 "compare": false, 00:15:40.881 "compare_and_write": false, 00:15:40.881 "abort": true, 00:15:40.881 "seek_hole": false, 00:15:40.881 "seek_data": false, 00:15:40.881 "copy": true, 00:15:40.881 "nvme_iov_md": false 00:15:40.881 }, 00:15:40.881 "memory_domains": [ 00:15:40.881 { 00:15:40.881 "dma_device_id": "system", 00:15:40.881 "dma_device_type": 1 00:15:40.881 }, 00:15:40.881 { 00:15:40.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.881 "dma_device_type": 2 00:15:40.881 } 00:15:40.881 ], 00:15:40.881 "driver_specific": {} 00:15:40.881 }' 00:15:40.881 10:25:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.881 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.881 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.881 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.881 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.139 10:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:41.397 [2024-07-26 10:25:54.177166] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.397 [2024-07-26 10:25:54.177190] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.397 [2024-07-26 10:25:54.177228] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.397 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.702 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.702 "name": "Existed_Raid", 00:15:41.702 "uuid": "50c8c63f-88ae-4ea8-a723-99b550415840", 00:15:41.702 "strip_size_kb": 64, 00:15:41.702 "state": "offline", 00:15:41.702 "raid_level": "raid0", 00:15:41.702 "superblock": false, 00:15:41.702 "num_base_bdevs": 3, 00:15:41.702 "num_base_bdevs_discovered": 2, 00:15:41.702 "num_base_bdevs_operational": 2, 00:15:41.702 "base_bdevs_list": [ 00:15:41.702 { 00:15:41.702 "name": null, 00:15:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.702 "is_configured": false, 00:15:41.702 "data_offset": 0, 00:15:41.702 "data_size": 65536 00:15:41.702 }, 00:15:41.702 { 00:15:41.702 "name": "BaseBdev2", 00:15:41.702 "uuid": "bd7f5190-508d-4dba-8905-6f483983dfbd", 00:15:41.702 "is_configured": true, 00:15:41.702 "data_offset": 0, 00:15:41.702 "data_size": 65536 00:15:41.702 }, 00:15:41.702 { 00:15:41.702 "name": "BaseBdev3", 00:15:41.702 "uuid": "89deb6c4-de5b-457a-984c-11fc563f1bab", 00:15:41.702 "is_configured": true, 00:15:41.702 "data_offset": 0, 00:15:41.702 "data_size": 65536 00:15:41.702 } 00:15:41.702 ] 00:15:41.702 }' 00:15:41.702 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.702 10:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.268 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:42.269 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:42.269 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.269 10:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:42.528 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:42.528 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.528 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:42.528 [2024-07-26 10:25:55.409488] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.787 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:43.046 [2024-07-26 10:25:55.876735] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.046 [2024-07-26 10:25:55.876776] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1de42d0 name Existed_Raid, state offline 00:15:43.046 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:43.046 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:43.046 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.046 10:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.305 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:43.305 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:43.305 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:43.305 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:43.305 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:43.305 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.564 BaseBdev2 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.564 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.824 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.083 [ 00:15:44.083 { 00:15:44.083 "name": "BaseBdev2", 00:15:44.083 "aliases": [ 00:15:44.083 "1b122546-843b-483c-9e41-5ce05d75b5eb" 00:15:44.083 ], 00:15:44.083 "product_name": "Malloc disk", 00:15:44.083 "block_size": 512, 00:15:44.083 "num_blocks": 65536, 00:15:44.083 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:44.083 "assigned_rate_limits": { 00:15:44.083 "rw_ios_per_sec": 0, 00:15:44.083 "rw_mbytes_per_sec": 0, 00:15:44.083 "r_mbytes_per_sec": 0, 00:15:44.083 "w_mbytes_per_sec": 0 00:15:44.083 }, 00:15:44.083 "claimed": false, 00:15:44.083 "zoned": false, 00:15:44.083 "supported_io_types": { 00:15:44.083 "read": true, 00:15:44.083 "write": true, 00:15:44.083 "unmap": true, 00:15:44.083 "flush": true, 00:15:44.083 "reset": true, 00:15:44.083 "nvme_admin": false, 
00:15:44.083 "nvme_io": false, 00:15:44.083 "nvme_io_md": false, 00:15:44.083 "write_zeroes": true, 00:15:44.083 "zcopy": true, 00:15:44.083 "get_zone_info": false, 00:15:44.083 "zone_management": false, 00:15:44.083 "zone_append": false, 00:15:44.083 "compare": false, 00:15:44.083 "compare_and_write": false, 00:15:44.083 "abort": true, 00:15:44.083 "seek_hole": false, 00:15:44.083 "seek_data": false, 00:15:44.083 "copy": true, 00:15:44.083 "nvme_iov_md": false 00:15:44.083 }, 00:15:44.083 "memory_domains": [ 00:15:44.083 { 00:15:44.083 "dma_device_id": "system", 00:15:44.083 "dma_device_type": 1 00:15:44.083 }, 00:15:44.083 { 00:15:44.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.083 "dma_device_type": 2 00:15:44.083 } 00:15:44.083 ], 00:15:44.083 "driver_specific": {} 00:15:44.083 } 00:15:44.083 ] 00:15:44.083 10:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.083 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:44.083 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:44.083 10:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.343 BaseBdev3 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.343 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.602 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.602 [ 00:15:44.602 { 00:15:44.602 "name": "BaseBdev3", 00:15:44.602 "aliases": [ 00:15:44.602 "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e" 00:15:44.602 ], 00:15:44.602 "product_name": "Malloc disk", 00:15:44.602 "block_size": 512, 00:15:44.602 "num_blocks": 65536, 00:15:44.602 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:44.602 "assigned_rate_limits": { 00:15:44.602 "rw_ios_per_sec": 0, 00:15:44.602 "rw_mbytes_per_sec": 0, 00:15:44.602 "r_mbytes_per_sec": 0, 00:15:44.602 "w_mbytes_per_sec": 0 00:15:44.602 }, 00:15:44.602 "claimed": false, 00:15:44.602 "zoned": false, 00:15:44.602 "supported_io_types": { 00:15:44.602 "read": true, 00:15:44.602 "write": true, 00:15:44.602 "unmap": true, 00:15:44.602 "flush": true, 00:15:44.602 "reset": true, 00:15:44.602 "nvme_admin": false, 00:15:44.602 "nvme_io": false, 00:15:44.602 "nvme_io_md": false, 00:15:44.602 "write_zeroes": true, 00:15:44.602 "zcopy": true, 00:15:44.602 "get_zone_info": false, 00:15:44.602 "zone_management": false, 00:15:44.602 "zone_append": false, 00:15:44.602 "compare": false, 00:15:44.602 
"compare_and_write": false, 00:15:44.602 "abort": true, 00:15:44.602 "seek_hole": false, 00:15:44.602 "seek_data": false, 00:15:44.602 "copy": true, 00:15:44.602 "nvme_iov_md": false 00:15:44.602 }, 00:15:44.602 "memory_domains": [ 00:15:44.602 { 00:15:44.602 "dma_device_id": "system", 00:15:44.602 "dma_device_type": 1 00:15:44.602 }, 00:15:44.602 { 00:15:44.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.602 "dma_device_type": 2 00:15:44.602 } 00:15:44.602 ], 00:15:44.602 "driver_specific": {} 00:15:44.602 } 00:15:44.602 ] 00:15:44.602 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.602 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:44.602 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:44.602 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:44.862 [2024-07-26 10:25:57.667945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.862 [2024-07-26 10:25:57.667981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.862 [2024-07-26 10:25:57.667999] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.862 [2024-07-26 10:25:57.669214] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.862 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.121 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.121 "name": "Existed_Raid", 00:15:45.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.122 "strip_size_kb": 64, 00:15:45.122 "state": "configuring", 00:15:45.122 "raid_level": "raid0", 00:15:45.122 "superblock": false, 00:15:45.122 "num_base_bdevs": 3, 00:15:45.122 "num_base_bdevs_discovered": 2, 00:15:45.122 
"num_base_bdevs_operational": 3, 00:15:45.122 "base_bdevs_list": [ 00:15:45.122 { 00:15:45.122 "name": "BaseBdev1", 00:15:45.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.122 "is_configured": false, 00:15:45.122 "data_offset": 0, 00:15:45.122 "data_size": 0 00:15:45.122 }, 00:15:45.122 { 00:15:45.122 "name": "BaseBdev2", 00:15:45.122 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:45.122 "is_configured": true, 00:15:45.122 "data_offset": 0, 00:15:45.122 "data_size": 65536 00:15:45.122 }, 00:15:45.122 { 00:15:45.122 "name": "BaseBdev3", 00:15:45.122 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:45.122 "is_configured": true, 00:15:45.122 "data_offset": 0, 00:15:45.122 "data_size": 65536 00:15:45.122 } 00:15:45.122 ] 00:15:45.122 }' 00:15:45.122 10:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.122 10:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.691 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:45.950 [2024-07-26 10:25:58.674580] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.950 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.209 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.209 "name": "Existed_Raid", 00:15:46.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.209 "strip_size_kb": 64, 00:15:46.209 "state": "configuring", 00:15:46.209 "raid_level": "raid0", 00:15:46.209 "superblock": false, 00:15:46.209 "num_base_bdevs": 3, 00:15:46.209 "num_base_bdevs_discovered": 1, 00:15:46.209 "num_base_bdevs_operational": 3, 00:15:46.209 "base_bdevs_list": [ 00:15:46.209 { 00:15:46.209 "name": "BaseBdev1", 00:15:46.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.209 "is_configured": false, 00:15:46.209 "data_offset": 0, 00:15:46.209 "data_size": 0 00:15:46.209 }, 00:15:46.209 { 00:15:46.209 "name": null, 
00:15:46.209 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:46.209 "is_configured": false, 00:15:46.209 "data_offset": 0, 00:15:46.209 "data_size": 65536 00:15:46.209 }, 00:15:46.209 { 00:15:46.209 "name": "BaseBdev3", 00:15:46.209 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:46.209 "is_configured": true, 00:15:46.209 "data_offset": 0, 00:15:46.209 "data_size": 65536 00:15:46.209 } 00:15:46.209 ] 00:15:46.209 }' 00:15:46.209 10:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.209 10:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.778 10:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.778 10:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.037 10:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:47.037 10:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.296 [2024-07-26 10:25:59.948999] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.296 BaseBdev1 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:47.296 10:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.296 10:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.555 [ 00:15:47.555 { 00:15:47.555 "name": "BaseBdev1", 00:15:47.555 "aliases": [ 00:15:47.555 "8d14a859-7e5e-43dd-821b-c04748b800bb" 00:15:47.555 ], 00:15:47.555 "product_name": "Malloc disk", 00:15:47.555 "block_size": 512, 00:15:47.555 "num_blocks": 65536, 00:15:47.555 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:47.555 "assigned_rate_limits": { 00:15:47.555 "rw_ios_per_sec": 0, 00:15:47.555 "rw_mbytes_per_sec": 0, 00:15:47.555 "r_mbytes_per_sec": 0, 00:15:47.555 "w_mbytes_per_sec": 0 00:15:47.555 }, 00:15:47.555 "claimed": true, 00:15:47.555 "claim_type": "exclusive_write", 00:15:47.555 "zoned": false, 00:15:47.555 "supported_io_types": { 00:15:47.555 "read": true, 00:15:47.555 "write": true, 00:15:47.555 "unmap": true, 00:15:47.555 "flush": true, 00:15:47.555 "reset": true, 00:15:47.555 "nvme_admin": false, 00:15:47.555 "nvme_io": false, 00:15:47.555 "nvme_io_md": false, 00:15:47.555 "write_zeroes": true, 00:15:47.555 "zcopy": true, 00:15:47.555 "get_zone_info": false, 
00:15:47.555 "zone_management": false, 00:15:47.555 "zone_append": false, 00:15:47.555 "compare": false, 00:15:47.555 "compare_and_write": false, 00:15:47.555 "abort": true, 00:15:47.555 "seek_hole": false, 00:15:47.555 "seek_data": false, 00:15:47.555 "copy": true, 00:15:47.555 "nvme_iov_md": false 00:15:47.555 }, 00:15:47.555 "memory_domains": [ 00:15:47.555 { 00:15:47.555 "dma_device_id": "system", 00:15:47.555 "dma_device_type": 1 00:15:47.555 }, 00:15:47.555 { 00:15:47.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.555 "dma_device_type": 2 00:15:47.555 } 00:15:47.555 ], 00:15:47.555 "driver_specific": {} 00:15:47.555 } 00:15:47.555 ] 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.555 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.814 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.814 "name": "Existed_Raid", 00:15:47.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.814 "strip_size_kb": 64, 00:15:47.814 "state": "configuring", 00:15:47.814 "raid_level": "raid0", 00:15:47.814 "superblock": false, 00:15:47.814 "num_base_bdevs": 3, 00:15:47.814 "num_base_bdevs_discovered": 2, 00:15:47.814 "num_base_bdevs_operational": 3, 00:15:47.814 "base_bdevs_list": [ 00:15:47.814 { 00:15:47.814 "name": "BaseBdev1", 00:15:47.814 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:47.815 "is_configured": true, 00:15:47.815 "data_offset": 0, 00:15:47.815 "data_size": 65536 00:15:47.815 }, 00:15:47.815 { 00:15:47.815 "name": null, 00:15:47.815 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:47.815 "is_configured": false, 00:15:47.815 "data_offset": 0, 00:15:47.815 "data_size": 65536 00:15:47.815 }, 00:15:47.815 { 00:15:47.815 "name": "BaseBdev3", 00:15:47.815 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:47.815 "is_configured": true, 00:15:47.815 "data_offset": 0, 00:15:47.815 "data_size": 65536 00:15:47.815 } 00:15:47.815 ] 00:15:47.815 }' 00:15:47.815 10:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:15:47.815 10:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.383 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.383 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:48.641 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:48.641 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:48.900 [2024-07-26 10:26:01.661533] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.900 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.159 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.159 "name": "Existed_Raid", 00:15:49.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.159 "strip_size_kb": 64, 00:15:49.159 "state": "configuring", 00:15:49.159 "raid_level": "raid0", 00:15:49.159 "superblock": false, 00:15:49.159 "num_base_bdevs": 3, 00:15:49.159 "num_base_bdevs_discovered": 1, 00:15:49.159 "num_base_bdevs_operational": 3, 00:15:49.159 "base_bdevs_list": [ 00:15:49.159 { 00:15:49.159 "name": "BaseBdev1", 00:15:49.159 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:49.159 "is_configured": true, 00:15:49.159 "data_offset": 0, 00:15:49.159 "data_size": 65536 00:15:49.159 }, 00:15:49.159 { 00:15:49.159 "name": null, 00:15:49.159 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:49.159 "is_configured": false, 00:15:49.159 "data_offset": 0, 00:15:49.159 "data_size": 65536 00:15:49.159 }, 00:15:49.159 { 00:15:49.159 "name": null, 00:15:49.159 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:49.159 "is_configured": false, 00:15:49.159 "data_offset": 0, 00:15:49.159 "data_size": 65536 00:15:49.159 } 00:15:49.159 ] 00:15:49.159 }' 
00:15:49.159 10:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.159 10:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.727 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.727 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.987 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:49.987 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:50.246 [2024-07-26 10:26:02.908837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.246 10:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.504 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.504 "name": "Existed_Raid", 00:15:50.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.504 "strip_size_kb": 64, 00:15:50.504 "state": "configuring", 00:15:50.504 "raid_level": "raid0", 00:15:50.504 "superblock": false, 00:15:50.504 "num_base_bdevs": 3, 00:15:50.504 "num_base_bdevs_discovered": 2, 00:15:50.504 "num_base_bdevs_operational": 3, 00:15:50.504 "base_bdevs_list": [ 00:15:50.504 { 00:15:50.504 "name": "BaseBdev1", 00:15:50.504 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:50.504 "is_configured": true, 00:15:50.504 "data_offset": 0, 00:15:50.504 "data_size": 65536 00:15:50.504 }, 00:15:50.504 { 00:15:50.504 "name": null, 00:15:50.504 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:50.504 "is_configured": false, 00:15:50.504 "data_offset": 0, 00:15:50.504 "data_size": 65536 00:15:50.504 }, 00:15:50.504 { 00:15:50.504 "name": "BaseBdev3", 00:15:50.504 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 
00:15:50.504 "is_configured": true, 00:15:50.504 "data_offset": 0, 00:15:50.504 "data_size": 65536 00:15:50.504 } 00:15:50.504 ] 00:15:50.504 }' 00:15:50.504 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.504 10:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.071 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.071 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.330 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:51.330 10:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:51.330 [2024-07-26 10:26:04.184217] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.330 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.589 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.589 "name": "Existed_Raid", 00:15:51.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.589 "strip_size_kb": 64, 00:15:51.589 "state": "configuring", 00:15:51.589 "raid_level": "raid0", 00:15:51.589 "superblock": false, 00:15:51.589 "num_base_bdevs": 3, 00:15:51.589 "num_base_bdevs_discovered": 1, 00:15:51.589 "num_base_bdevs_operational": 3, 00:15:51.589 "base_bdevs_list": [ 00:15:51.589 { 00:15:51.589 "name": null, 00:15:51.589 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:51.589 "is_configured": false, 00:15:51.589 "data_offset": 0, 00:15:51.589 "data_size": 65536 00:15:51.589 }, 00:15:51.589 { 00:15:51.589 "name": null, 00:15:51.589 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:51.589 "is_configured": false, 00:15:51.589 "data_offset": 0, 00:15:51.589 "data_size": 65536 00:15:51.589 }, 00:15:51.589 { 
00:15:51.589 "name": "BaseBdev3", 00:15:51.589 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:51.589 "is_configured": true, 00:15:51.589 "data_offset": 0, 00:15:51.589 "data_size": 65536 00:15:51.589 } 00:15:51.589 ] 00:15:51.589 }' 00:15:51.589 10:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.589 10:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.155 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:52.155 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.412 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:52.412 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:52.670 [2024-07-26 10:26:05.445657] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.670 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.928 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.928 "name": "Existed_Raid", 00:15:52.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.928 "strip_size_kb": 64, 00:15:52.928 "state": "configuring", 00:15:52.928 "raid_level": "raid0", 00:15:52.928 "superblock": false, 00:15:52.928 "num_base_bdevs": 3, 00:15:52.928 "num_base_bdevs_discovered": 2, 00:15:52.928 "num_base_bdevs_operational": 3, 00:15:52.928 "base_bdevs_list": [ 00:15:52.928 { 00:15:52.928 "name": null, 00:15:52.928 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:52.928 "is_configured": false, 00:15:52.928 "data_offset": 0, 00:15:52.928 "data_size": 65536 00:15:52.928 }, 00:15:52.928 { 00:15:52.928 "name": "BaseBdev2", 00:15:52.928 "uuid": 
"1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:52.928 "is_configured": true, 00:15:52.928 "data_offset": 0, 00:15:52.928 "data_size": 65536 00:15:52.928 }, 00:15:52.928 { 00:15:52.928 "name": "BaseBdev3", 00:15:52.928 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:52.928 "is_configured": true, 00:15:52.928 "data_offset": 0, 00:15:52.928 "data_size": 65536 00:15:52.928 } 00:15:52.928 ] 00:15:52.928 }' 00:15:52.928 10:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.928 10:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.494 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.494 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:53.752 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:53.752 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.752 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:54.011 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8d14a859-7e5e-43dd-821b-c04748b800bb 00:15:54.269 [2024-07-26 10:26:06.916758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:54.269 [2024-07-26 10:26:06.916789] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c34f70 00:15:54.269 [2024-07-26 10:26:06.916797] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:54.269 [2024-07-26 10:26:06.916973] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c36450 00:15:54.269 [2024-07-26 10:26:06.917075] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c34f70 00:15:54.269 [2024-07-26 10:26:06.917084] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c34f70 00:15:54.269 [2024-07-26 10:26:06.917237] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.269 NewBaseBdev 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.269 10:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:54.269 10:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:54.528 [ 00:15:54.528 { 00:15:54.528 "name": "NewBaseBdev", 00:15:54.528 "aliases": [ 00:15:54.528 "8d14a859-7e5e-43dd-821b-c04748b800bb" 00:15:54.528 ], 00:15:54.528 "product_name": "Malloc disk", 00:15:54.528 "block_size": 512, 00:15:54.528 "num_blocks": 65536, 00:15:54.528 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:54.528 "assigned_rate_limits": { 00:15:54.528 "rw_ios_per_sec": 0, 00:15:54.528 "rw_mbytes_per_sec": 0, 00:15:54.528 "r_mbytes_per_sec": 0, 00:15:54.528 "w_mbytes_per_sec": 0 00:15:54.528 }, 00:15:54.528 "claimed": true, 00:15:54.528 "claim_type": "exclusive_write", 00:15:54.528 "zoned": false, 00:15:54.528 "supported_io_types": { 00:15:54.528 "read": true, 00:15:54.528 "write": true, 00:15:54.528 "unmap": true, 00:15:54.528 "flush": true, 00:15:54.528 "reset": true, 00:15:54.528 "nvme_admin": false, 00:15:54.528 "nvme_io": false, 00:15:54.528 "nvme_io_md": false, 00:15:54.528 "write_zeroes": true, 00:15:54.528 "zcopy": true, 00:15:54.528 "get_zone_info": false, 00:15:54.528 "zone_management": false, 00:15:54.528 "zone_append": false, 00:15:54.528 "compare": false, 00:15:54.528 "compare_and_write": false, 00:15:54.528 "abort": true, 00:15:54.528 "seek_hole": false, 00:15:54.528 "seek_data": false, 00:15:54.528 "copy": true, 00:15:54.528 "nvme_iov_md": false 00:15:54.528 }, 00:15:54.528 "memory_domains": [ 00:15:54.528 { 00:15:54.528 "dma_device_id": "system", 00:15:54.528 "dma_device_type": 1 00:15:54.528 }, 00:15:54.528 { 00:15:54.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.528 "dma_device_type": 2 00:15:54.528 } 00:15:54.528 ], 00:15:54.528 "driver_specific": {} 00:15:54.528 } 00:15:54.528 ] 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.528 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.787 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.787 "name": "Existed_Raid", 00:15:54.787 "uuid": 
"93cb78d9-8ea5-459e-b318-9b6c8925e4b2", 00:15:54.787 "strip_size_kb": 64, 00:15:54.787 "state": "online", 00:15:54.787 "raid_level": "raid0", 00:15:54.787 "superblock": false, 00:15:54.787 "num_base_bdevs": 3, 00:15:54.787 "num_base_bdevs_discovered": 3, 00:15:54.787 "num_base_bdevs_operational": 3, 00:15:54.787 "base_bdevs_list": [ 00:15:54.787 { 00:15:54.787 "name": "NewBaseBdev", 00:15:54.787 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:54.787 "is_configured": true, 00:15:54.787 "data_offset": 0, 00:15:54.787 "data_size": 65536 00:15:54.787 }, 00:15:54.787 { 00:15:54.787 "name": "BaseBdev2", 00:15:54.787 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:54.787 "is_configured": true, 00:15:54.787 "data_offset": 0, 00:15:54.787 "data_size": 65536 00:15:54.787 }, 00:15:54.787 { 00:15:54.787 "name": "BaseBdev3", 00:15:54.787 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:54.787 "is_configured": true, 00:15:54.787 "data_offset": 0, 00:15:54.787 "data_size": 65536 00:15:54.787 } 00:15:54.787 ] 00:15:54.787 }' 00:15:54.787 10:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.787 10:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:55.394 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:55.686 [2024-07-26 10:26:08.392939] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.686 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:55.686 "name": "Existed_Raid", 00:15:55.686 "aliases": [ 00:15:55.686 "93cb78d9-8ea5-459e-b318-9b6c8925e4b2" 00:15:55.686 ], 00:15:55.686 "product_name": "Raid Volume", 00:15:55.686 "block_size": 512, 00:15:55.686 "num_blocks": 196608, 00:15:55.686 "uuid": "93cb78d9-8ea5-459e-b318-9b6c8925e4b2", 00:15:55.686 "assigned_rate_limits": { 00:15:55.686 "rw_ios_per_sec": 0, 00:15:55.686 "rw_mbytes_per_sec": 0, 00:15:55.686 "r_mbytes_per_sec": 0, 00:15:55.686 "w_mbytes_per_sec": 0 00:15:55.686 }, 00:15:55.686 "claimed": false, 00:15:55.686 "zoned": false, 00:15:55.686 "supported_io_types": { 00:15:55.686 "read": true, 00:15:55.686 "write": true, 00:15:55.686 "unmap": true, 00:15:55.686 "flush": true, 00:15:55.686 "reset": true, 00:15:55.686 "nvme_admin": false, 00:15:55.686 "nvme_io": false, 00:15:55.686 "nvme_io_md": false, 00:15:55.686 "write_zeroes": true, 00:15:55.686 "zcopy": false, 00:15:55.686 "get_zone_info": false, 00:15:55.686 "zone_management": false, 00:15:55.686 "zone_append": false, 00:15:55.686 "compare": false, 00:15:55.686 "compare_and_write": false, 00:15:55.686 "abort": 
false, 00:15:55.686 "seek_hole": false, 00:15:55.686 "seek_data": false, 00:15:55.686 "copy": false, 00:15:55.686 "nvme_iov_md": false 00:15:55.686 }, 00:15:55.686 "memory_domains": [ 00:15:55.686 { 00:15:55.686 "dma_device_id": "system", 00:15:55.686 "dma_device_type": 1 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.686 "dma_device_type": 2 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "dma_device_id": "system", 00:15:55.686 "dma_device_type": 1 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.686 "dma_device_type": 2 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "dma_device_id": "system", 00:15:55.686 "dma_device_type": 1 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.686 "dma_device_type": 2 00:15:55.686 } 00:15:55.686 ], 00:15:55.686 "driver_specific": { 00:15:55.686 "raid": { 00:15:55.686 "uuid": "93cb78d9-8ea5-459e-b318-9b6c8925e4b2", 00:15:55.686 "strip_size_kb": 64, 00:15:55.686 "state": "online", 00:15:55.686 "raid_level": "raid0", 00:15:55.686 "superblock": false, 00:15:55.686 "num_base_bdevs": 3, 00:15:55.686 "num_base_bdevs_discovered": 3, 00:15:55.686 "num_base_bdevs_operational": 3, 00:15:55.686 "base_bdevs_list": [ 00:15:55.686 { 00:15:55.686 "name": "NewBaseBdev", 00:15:55.686 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:55.686 "is_configured": true, 00:15:55.686 "data_offset": 0, 00:15:55.686 "data_size": 65536 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "name": "BaseBdev2", 00:15:55.686 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:55.686 "is_configured": true, 00:15:55.686 "data_offset": 0, 00:15:55.686 "data_size": 65536 00:15:55.686 }, 00:15:55.686 { 00:15:55.686 "name": "BaseBdev3", 00:15:55.686 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:55.686 "is_configured": true, 00:15:55.686 "data_offset": 0, 00:15:55.686 "data_size": 65536 00:15:55.686 } 00:15:55.686 ] 00:15:55.686 } 00:15:55.686 } 00:15:55.686 }' 00:15:55.686 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.686 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:55.686 BaseBdev2 00:15:55.686 BaseBdev3' 00:15:55.686 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:55.686 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:55.686 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:55.945 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:55.945 "name": "NewBaseBdev", 00:15:55.945 "aliases": [ 00:15:55.945 "8d14a859-7e5e-43dd-821b-c04748b800bb" 00:15:55.945 ], 00:15:55.945 "product_name": "Malloc disk", 00:15:55.945 "block_size": 512, 00:15:55.945 "num_blocks": 65536, 00:15:55.945 "uuid": "8d14a859-7e5e-43dd-821b-c04748b800bb", 00:15:55.945 "assigned_rate_limits": { 00:15:55.945 "rw_ios_per_sec": 0, 00:15:55.945 "rw_mbytes_per_sec": 0, 00:15:55.945 "r_mbytes_per_sec": 0, 00:15:55.945 "w_mbytes_per_sec": 0 00:15:55.945 }, 00:15:55.945 "claimed": true, 00:15:55.945 "claim_type": "exclusive_write", 00:15:55.945 "zoned": false, 00:15:55.945 "supported_io_types": { 00:15:55.945 "read": true, 
00:15:55.945 "write": true, 00:15:55.945 "unmap": true, 00:15:55.945 "flush": true, 00:15:55.945 "reset": true, 00:15:55.945 "nvme_admin": false, 00:15:55.945 "nvme_io": false, 00:15:55.945 "nvme_io_md": false, 00:15:55.945 "write_zeroes": true, 00:15:55.945 "zcopy": true, 00:15:55.945 "get_zone_info": false, 00:15:55.945 "zone_management": false, 00:15:55.945 "zone_append": false, 00:15:55.945 "compare": false, 00:15:55.945 "compare_and_write": false, 00:15:55.945 "abort": true, 00:15:55.945 "seek_hole": false, 00:15:55.945 "seek_data": false, 00:15:55.945 "copy": true, 00:15:55.945 "nvme_iov_md": false 00:15:55.945 }, 00:15:55.945 "memory_domains": [ 00:15:55.945 { 00:15:55.945 "dma_device_id": "system", 00:15:55.945 "dma_device_type": 1 00:15:55.945 }, 00:15:55.945 { 00:15:55.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.945 "dma_device_type": 2 00:15:55.945 } 00:15:55.945 ], 00:15:55.945 "driver_specific": {} 00:15:55.945 }' 00:15:55.945 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:55.945 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:55.945 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:55.945 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:55.945 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.204 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.204 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.204 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.204 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.204 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.204 10:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.204 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.204 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.204 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:56.204 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.462 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.462 "name": "BaseBdev2", 00:15:56.462 "aliases": [ 00:15:56.462 "1b122546-843b-483c-9e41-5ce05d75b5eb" 00:15:56.462 ], 00:15:56.462 "product_name": "Malloc disk", 00:15:56.462 "block_size": 512, 00:15:56.462 "num_blocks": 65536, 00:15:56.462 "uuid": "1b122546-843b-483c-9e41-5ce05d75b5eb", 00:15:56.462 "assigned_rate_limits": { 00:15:56.462 "rw_ios_per_sec": 0, 00:15:56.462 "rw_mbytes_per_sec": 0, 00:15:56.462 "r_mbytes_per_sec": 0, 00:15:56.462 "w_mbytes_per_sec": 0 00:15:56.462 }, 00:15:56.462 "claimed": true, 00:15:56.462 "claim_type": "exclusive_write", 00:15:56.462 "zoned": false, 00:15:56.462 "supported_io_types": { 00:15:56.462 "read": true, 00:15:56.462 "write": true, 00:15:56.462 "unmap": true, 00:15:56.462 "flush": true, 00:15:56.462 "reset": true, 00:15:56.462 "nvme_admin": false, 00:15:56.462 "nvme_io": false, 
00:15:56.462 "nvme_io_md": false, 00:15:56.462 "write_zeroes": true, 00:15:56.462 "zcopy": true, 00:15:56.462 "get_zone_info": false, 00:15:56.462 "zone_management": false, 00:15:56.462 "zone_append": false, 00:15:56.462 "compare": false, 00:15:56.462 "compare_and_write": false, 00:15:56.462 "abort": true, 00:15:56.462 "seek_hole": false, 00:15:56.462 "seek_data": false, 00:15:56.462 "copy": true, 00:15:56.462 "nvme_iov_md": false 00:15:56.462 }, 00:15:56.462 "memory_domains": [ 00:15:56.462 { 00:15:56.462 "dma_device_id": "system", 00:15:56.462 "dma_device_type": 1 00:15:56.462 }, 00:15:56.462 { 00:15:56.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.462 "dma_device_type": 2 00:15:56.462 } 00:15:56.462 ], 00:15:56.462 "driver_specific": {} 00:15:56.462 }' 00:15:56.462 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.462 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.462 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.462 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.720 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.720 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.720 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.720 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:56.721 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.979 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.979 "name": "BaseBdev3", 00:15:56.979 "aliases": [ 00:15:56.979 "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e" 00:15:56.979 ], 00:15:56.979 "product_name": "Malloc disk", 00:15:56.979 "block_size": 512, 00:15:56.979 "num_blocks": 65536, 00:15:56.979 "uuid": "eba2eba1-f2b1-4c9f-80ac-c6cd9399652e", 00:15:56.979 "assigned_rate_limits": { 00:15:56.979 "rw_ios_per_sec": 0, 00:15:56.979 "rw_mbytes_per_sec": 0, 00:15:56.979 "r_mbytes_per_sec": 0, 00:15:56.979 "w_mbytes_per_sec": 0 00:15:56.979 }, 00:15:56.979 "claimed": true, 00:15:56.979 "claim_type": "exclusive_write", 00:15:56.979 "zoned": false, 00:15:56.979 "supported_io_types": { 00:15:56.979 "read": true, 00:15:56.979 "write": true, 00:15:56.979 "unmap": true, 00:15:56.979 "flush": true, 00:15:56.979 "reset": true, 00:15:56.979 "nvme_admin": false, 00:15:56.979 "nvme_io": false, 00:15:56.979 "nvme_io_md": false, 00:15:56.979 "write_zeroes": true, 00:15:56.979 "zcopy": true, 00:15:56.979 "get_zone_info": false, 00:15:56.979 "zone_management": false, 
00:15:56.979 "zone_append": false, 00:15:56.979 "compare": false, 00:15:56.979 "compare_and_write": false, 00:15:56.979 "abort": true, 00:15:56.979 "seek_hole": false, 00:15:56.979 "seek_data": false, 00:15:56.979 "copy": true, 00:15:56.979 "nvme_iov_md": false 00:15:56.979 }, 00:15:56.979 "memory_domains": [ 00:15:56.979 { 00:15:56.979 "dma_device_id": "system", 00:15:56.980 "dma_device_type": 1 00:15:56.980 }, 00:15:56.980 { 00:15:56.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.980 "dma_device_type": 2 00:15:56.980 } 00:15:56.980 ], 00:15:56.980 "driver_specific": {} 00:15:56.980 }' 00:15:56.980 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.980 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.980 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.980 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.238 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.238 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.238 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.238 10:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.238 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.238 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.238 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.238 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.238 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:57.496 [2024-07-26 10:26:10.321764] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.496 [2024-07-26 10:26:10.321789] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.496 [2024-07-26 10:26:10.321846] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.496 [2024-07-26 10:26:10.321892] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.496 [2024-07-26 10:26:10.321903] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c34f70 name Existed_Raid, state offline 00:15:57.496 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3370054 00:15:57.496 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3370054 ']' 00:15:57.496 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3370054 00:15:57.496 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:57.496 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.496 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3370054 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.755 10:26:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3370054' 00:15:57.755 killing process with pid 3370054 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3370054 00:15:57.755 [2024-07-26 10:26:10.400511] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3370054 00:15:57.755 [2024-07-26 10:26:10.424085] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:57.755 00:15:57.755 real 0m26.741s 00:15:57.755 user 0m49.034s 00:15:57.755 sys 0m4.946s 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.755 10:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.755 ************************************ 00:15:57.755 END TEST raid_state_function_test 00:15:57.755 ************************************ 00:15:57.755 10:26:10 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:57.755 10:26:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:57.755 10:26:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.755 10:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 ************************************ 00:15:58.015 START TEST raid_state_function_test_sb 00:15:58.015 ************************************ 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
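The xtrace entries above show raid_state_function_test being re-run as raid_state_function_test_sb with raid0, three base bdevs, and superblock=true; the lines that follow assemble the base bdev list and the -z 64 / -s create arguments. A minimal bash sketch of that assembly, written independently of the script and only assuming the same defaults (the real test builds the list through echo in a subshell):

num_base_bdevs=3
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")          # BaseBdev1 BaseBdev2 BaseBdev3
done
raid_bdev_name=Existed_Raid
strip_size=64
strip_size_create_arg="-z $strip_size"  # raid0 is striped, so a strip size is passed
superblock_create_arg=-s                # present only because superblock=true in the _sb variant

These values feed the bdev_raid_create invocations that appear further down in the trace.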
00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3375172 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3375172' 00:15:58.015 Process raid pid: 3375172 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3375172 /var/tmp/spdk-raid.sock 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3375172 ']' 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:58.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.015 10:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.015 [2024-07-26 10:26:10.753635] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
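Just above, the test records raid_pid=3375172 after launching bdev_svc on a private RPC socket and then calls waitforlisten; the SPDK/DPDK start-up banner the daemon prints continues below. A rough sketch of that launch-and-wait step, with the wait loop written from scratch rather than copied from autotest_common.sh:

spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc_sock=/var/tmp/spdk-raid.sock

"$spdk/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
raid_pid=$!

# Poll the UNIX-domain socket until the RPC server answers, roughly what waitforlisten does.
until "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
    sleep 0.1
done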
00:15:58.015 [2024-07-26 10:26:10.753695] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:58.015 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:58.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:58.015 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:58.015 [2024-07-26 10:26:10.889976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.274 [2024-07-26 10:26:10.934908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.274 [2024-07-26 10:26:10.996189] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.274 [2024-07-26 10:26:10.996222] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.842 10:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.842 10:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:58.842 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:59.101 [2024-07-26 10:26:11.849463] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.101 [2024-07-26 10:26:11.849503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.101 [2024-07-26 10:26:11.849512] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.101 [2024-07-26 10:26:11.849523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.101 [2024-07-26 10:26:11.849535] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.101 [2024-07-26 10:26:11.849546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.101 10:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.360 10:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.360 "name": "Existed_Raid", 00:15:59.360 "uuid": "7e2560e6-743f-4149-9227-a4859c54c01d", 00:15:59.360 "strip_size_kb": 64, 00:15:59.360 "state": "configuring", 00:15:59.360 "raid_level": "raid0", 00:15:59.360 "superblock": true, 00:15:59.360 "num_base_bdevs": 3, 00:15:59.360 "num_base_bdevs_discovered": 0, 00:15:59.360 "num_base_bdevs_operational": 3, 00:15:59.360 "base_bdevs_list": [ 00:15:59.360 { 00:15:59.360 "name": "BaseBdev1", 00:15:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.360 "is_configured": false, 00:15:59.360 "data_offset": 0, 00:15:59.360 "data_size": 0 00:15:59.360 }, 00:15:59.360 { 00:15:59.360 "name": "BaseBdev2", 00:15:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.360 "is_configured": false, 00:15:59.360 "data_offset": 0, 00:15:59.360 "data_size": 0 00:15:59.360 }, 00:15:59.360 { 00:15:59.360 "name": "BaseBdev3", 00:15:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.360 "is_configured": false, 00:15:59.360 "data_offset": 0, 00:15:59.360 "data_size": 0 00:15:59.360 } 00:15:59.360 ] 00:15:59.360 }' 00:15:59.360 10:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.360 10:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.928 10:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.187 [2024-07-26 10:26:12.864001] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.188 [2024-07-26 10:26:12.864029] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1802b70 name Existed_Raid, state configuring 00:16:00.188 10:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:00.447 [2024-07-26 10:26:13.092639] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.447 [2024-07-26 10:26:13.092666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.447 [2024-07-26 10:26:13.092674] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.447 [2024-07-26 10:26:13.092685] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.447 [2024-07-26 10:26:13.092693] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.447 [2024-07-26 10:26:13.092707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.447 [2024-07-26 10:26:13.330710] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.447 BaseBdev1 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.447 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.706 10:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.964 [ 00:16:00.964 { 00:16:00.964 "name": "BaseBdev1", 00:16:00.964 "aliases": [ 00:16:00.964 "b2844215-a12d-4692-8537-e86f58137bfe" 00:16:00.964 ], 00:16:00.964 "product_name": "Malloc disk", 00:16:00.964 "block_size": 512, 00:16:00.964 "num_blocks": 65536, 00:16:00.964 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:00.964 "assigned_rate_limits": { 00:16:00.964 "rw_ios_per_sec": 0, 00:16:00.964 "rw_mbytes_per_sec": 0, 00:16:00.964 "r_mbytes_per_sec": 0, 00:16:00.964 "w_mbytes_per_sec": 0 00:16:00.964 }, 00:16:00.964 "claimed": true, 00:16:00.964 "claim_type": "exclusive_write", 00:16:00.964 "zoned": false, 00:16:00.964 "supported_io_types": { 00:16:00.964 "read": true, 00:16:00.964 "write": true, 00:16:00.964 "unmap": true, 00:16:00.964 "flush": true, 00:16:00.964 "reset": true, 00:16:00.964 "nvme_admin": false, 00:16:00.964 "nvme_io": false, 00:16:00.964 "nvme_io_md": false, 00:16:00.964 "write_zeroes": true, 00:16:00.964 "zcopy": true, 00:16:00.964 "get_zone_info": false, 00:16:00.964 "zone_management": false, 00:16:00.964 "zone_append": false, 00:16:00.964 "compare": false, 00:16:00.964 "compare_and_write": false, 00:16:00.964 "abort": true, 00:16:00.964 "seek_hole": false, 00:16:00.964 "seek_data": false, 00:16:00.964 "copy": true, 00:16:00.964 "nvme_iov_md": false 00:16:00.964 }, 00:16:00.964 "memory_domains": [ 00:16:00.964 { 00:16:00.964 "dma_device_id": "system", 00:16:00.964 "dma_device_type": 1 00:16:00.964 }, 00:16:00.964 { 00:16:00.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.964 "dma_device_type": 2 00:16:00.964 } 00:16:00.964 ], 00:16:00.964 "driver_specific": {} 00:16:00.964 } 00:16:00.964 ] 00:16:00.964 10:26:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.964 10:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.223 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.223 "name": "Existed_Raid", 00:16:01.223 "uuid": "0fce0338-6567-44ad-b107-379bc31738a4", 00:16:01.223 "strip_size_kb": 64, 00:16:01.223 "state": "configuring", 00:16:01.223 "raid_level": "raid0", 00:16:01.223 "superblock": true, 00:16:01.223 "num_base_bdevs": 3, 00:16:01.223 "num_base_bdevs_discovered": 1, 00:16:01.223 "num_base_bdevs_operational": 3, 00:16:01.223 "base_bdevs_list": [ 00:16:01.223 { 00:16:01.223 "name": "BaseBdev1", 00:16:01.223 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:01.223 "is_configured": true, 00:16:01.223 "data_offset": 2048, 00:16:01.223 "data_size": 63488 00:16:01.223 }, 00:16:01.223 { 00:16:01.223 "name": "BaseBdev2", 00:16:01.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.223 "is_configured": false, 00:16:01.223 "data_offset": 0, 00:16:01.223 "data_size": 0 00:16:01.223 }, 00:16:01.223 { 00:16:01.223 "name": "BaseBdev3", 00:16:01.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.223 "is_configured": false, 00:16:01.223 "data_offset": 0, 00:16:01.223 "data_size": 0 00:16:01.223 } 00:16:01.223 ] 00:16:01.223 }' 00:16:01.223 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.223 10:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.789 10:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.049 [2024-07-26 10:26:14.814629] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.049 [2024-07-26 10:26:14.814664] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18024a0 name Existed_Raid, state configuring 00:16:02.049 10:26:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:02.308 [2024-07-26 10:26:15.039264] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.308 [2024-07-26 10:26:15.040593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.308 [2024-07-26 10:26:15.040623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.308 [2024-07-26 10:26:15.040633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.308 [2024-07-26 10:26:15.040644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.308 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.567 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.567 "name": "Existed_Raid", 00:16:02.567 "uuid": "f965089d-1dac-4152-9802-6026f005b0bd", 00:16:02.567 "strip_size_kb": 64, 00:16:02.567 "state": "configuring", 00:16:02.567 "raid_level": "raid0", 00:16:02.567 "superblock": true, 00:16:02.567 "num_base_bdevs": 3, 00:16:02.567 "num_base_bdevs_discovered": 1, 00:16:02.567 "num_base_bdevs_operational": 3, 00:16:02.567 "base_bdevs_list": [ 00:16:02.567 { 00:16:02.567 "name": "BaseBdev1", 00:16:02.567 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:02.567 "is_configured": true, 00:16:02.567 "data_offset": 2048, 00:16:02.567 "data_size": 63488 00:16:02.567 }, 00:16:02.567 { 00:16:02.567 "name": "BaseBdev2", 00:16:02.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.567 "is_configured": false, 00:16:02.567 "data_offset": 0, 
00:16:02.567 "data_size": 0 00:16:02.567 }, 00:16:02.567 { 00:16:02.567 "name": "BaseBdev3", 00:16:02.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.567 "is_configured": false, 00:16:02.567 "data_offset": 0, 00:16:02.567 "data_size": 0 00:16:02.568 } 00:16:02.568 ] 00:16:02.568 }' 00:16:02.568 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.568 10:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.136 10:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.395 [2024-07-26 10:26:16.061172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.395 BaseBdev2 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:03.395 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.654 [ 00:16:03.654 { 00:16:03.654 "name": "BaseBdev2", 00:16:03.654 "aliases": [ 00:16:03.654 "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f" 00:16:03.654 ], 00:16:03.654 "product_name": "Malloc disk", 00:16:03.654 "block_size": 512, 00:16:03.654 "num_blocks": 65536, 00:16:03.654 "uuid": "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f", 00:16:03.654 "assigned_rate_limits": { 00:16:03.654 "rw_ios_per_sec": 0, 00:16:03.654 "rw_mbytes_per_sec": 0, 00:16:03.654 "r_mbytes_per_sec": 0, 00:16:03.654 "w_mbytes_per_sec": 0 00:16:03.654 }, 00:16:03.654 "claimed": true, 00:16:03.654 "claim_type": "exclusive_write", 00:16:03.654 "zoned": false, 00:16:03.654 "supported_io_types": { 00:16:03.654 "read": true, 00:16:03.654 "write": true, 00:16:03.654 "unmap": true, 00:16:03.654 "flush": true, 00:16:03.654 "reset": true, 00:16:03.654 "nvme_admin": false, 00:16:03.654 "nvme_io": false, 00:16:03.654 "nvme_io_md": false, 00:16:03.654 "write_zeroes": true, 00:16:03.654 "zcopy": true, 00:16:03.654 "get_zone_info": false, 00:16:03.654 "zone_management": false, 00:16:03.654 "zone_append": false, 00:16:03.654 "compare": false, 00:16:03.654 "compare_and_write": false, 00:16:03.654 "abort": true, 00:16:03.654 "seek_hole": false, 00:16:03.654 "seek_data": false, 00:16:03.654 "copy": true, 00:16:03.654 "nvme_iov_md": false 00:16:03.654 }, 00:16:03.654 "memory_domains": [ 00:16:03.654 { 00:16:03.654 "dma_device_id": "system", 00:16:03.654 "dma_device_type": 1 00:16:03.654 }, 00:16:03.654 { 00:16:03.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.654 "dma_device_type": 
2 00:16:03.654 } 00:16:03.654 ], 00:16:03.654 "driver_specific": {} 00:16:03.654 } 00:16:03.654 ] 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.654 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.914 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.914 "name": "Existed_Raid", 00:16:03.914 "uuid": "f965089d-1dac-4152-9802-6026f005b0bd", 00:16:03.914 "strip_size_kb": 64, 00:16:03.914 "state": "configuring", 00:16:03.914 "raid_level": "raid0", 00:16:03.914 "superblock": true, 00:16:03.914 "num_base_bdevs": 3, 00:16:03.914 "num_base_bdevs_discovered": 2, 00:16:03.914 "num_base_bdevs_operational": 3, 00:16:03.914 "base_bdevs_list": [ 00:16:03.914 { 00:16:03.914 "name": "BaseBdev1", 00:16:03.914 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:03.914 "is_configured": true, 00:16:03.914 "data_offset": 2048, 00:16:03.914 "data_size": 63488 00:16:03.914 }, 00:16:03.914 { 00:16:03.914 "name": "BaseBdev2", 00:16:03.914 "uuid": "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f", 00:16:03.914 "is_configured": true, 00:16:03.914 "data_offset": 2048, 00:16:03.914 "data_size": 63488 00:16:03.914 }, 00:16:03.914 { 00:16:03.914 "name": "BaseBdev3", 00:16:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.914 "is_configured": false, 00:16:03.914 "data_offset": 0, 00:16:03.914 "data_size": 0 00:16:03.914 } 00:16:03.914 ] 00:16:03.914 }' 00:16:03.914 10:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.914 10:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.482 10:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.741 [2024-07-26 10:26:17.548335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.741 [2024-07-26 10:26:17.548480] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x19b52d0 00:16:04.741 [2024-07-26 10:26:17.548492] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.741 [2024-07-26 10:26:17.548651] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1802a90 00:16:04.741 [2024-07-26 10:26:17.548757] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19b52d0 00:16:04.741 [2024-07-26 10:26:17.548766] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19b52d0 00:16:04.741 [2024-07-26 10:26:17.548849] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.741 BaseBdev3 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:04.741 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.999 10:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.256 [ 00:16:05.256 { 00:16:05.256 "name": "BaseBdev3", 00:16:05.256 "aliases": [ 00:16:05.256 "1acb2c4e-f0b3-4f96-b7a5-2296399db49a" 00:16:05.256 ], 00:16:05.256 "product_name": "Malloc disk", 00:16:05.256 "block_size": 512, 00:16:05.256 "num_blocks": 65536, 00:16:05.256 "uuid": "1acb2c4e-f0b3-4f96-b7a5-2296399db49a", 00:16:05.256 "assigned_rate_limits": { 00:16:05.256 "rw_ios_per_sec": 0, 00:16:05.256 "rw_mbytes_per_sec": 0, 00:16:05.256 "r_mbytes_per_sec": 0, 00:16:05.256 "w_mbytes_per_sec": 0 00:16:05.256 }, 00:16:05.256 "claimed": true, 00:16:05.256 "claim_type": "exclusive_write", 00:16:05.256 "zoned": false, 00:16:05.256 "supported_io_types": { 00:16:05.256 "read": true, 00:16:05.256 "write": true, 00:16:05.256 "unmap": true, 00:16:05.256 "flush": true, 00:16:05.256 "reset": true, 00:16:05.256 "nvme_admin": false, 00:16:05.256 "nvme_io": false, 00:16:05.256 "nvme_io_md": false, 00:16:05.256 "write_zeroes": true, 00:16:05.256 "zcopy": true, 00:16:05.256 "get_zone_info": false, 00:16:05.256 "zone_management": false, 00:16:05.256 "zone_append": false, 00:16:05.256 "compare": false, 00:16:05.256 "compare_and_write": false, 00:16:05.256 "abort": true, 00:16:05.256 "seek_hole": false, 00:16:05.256 "seek_data": false, 00:16:05.256 "copy": true, 00:16:05.256 "nvme_iov_md": false 00:16:05.256 }, 00:16:05.256 "memory_domains": [ 00:16:05.256 { 00:16:05.256 "dma_device_id": "system", 00:16:05.256 "dma_device_type": 1 00:16:05.256 }, 00:16:05.256 { 00:16:05.256 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.256 "dma_device_type": 2 00:16:05.256 } 00:16:05.256 ], 00:16:05.256 "driver_specific": {} 00:16:05.256 } 00:16:05.256 ] 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.256 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.257 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.257 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.257 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.515 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.515 "name": "Existed_Raid", 00:16:05.515 "uuid": "f965089d-1dac-4152-9802-6026f005b0bd", 00:16:05.515 "strip_size_kb": 64, 00:16:05.515 "state": "online", 00:16:05.515 "raid_level": "raid0", 00:16:05.515 "superblock": true, 00:16:05.515 "num_base_bdevs": 3, 00:16:05.515 "num_base_bdevs_discovered": 3, 00:16:05.515 "num_base_bdevs_operational": 3, 00:16:05.515 "base_bdevs_list": [ 00:16:05.515 { 00:16:05.515 "name": "BaseBdev1", 00:16:05.515 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:05.515 "is_configured": true, 00:16:05.515 "data_offset": 2048, 00:16:05.515 "data_size": 63488 00:16:05.515 }, 00:16:05.515 { 00:16:05.515 "name": "BaseBdev2", 00:16:05.515 "uuid": "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f", 00:16:05.515 "is_configured": true, 00:16:05.515 "data_offset": 2048, 00:16:05.515 "data_size": 63488 00:16:05.515 }, 00:16:05.515 { 00:16:05.515 "name": "BaseBdev3", 00:16:05.515 "uuid": "1acb2c4e-f0b3-4f96-b7a5-2296399db49a", 00:16:05.515 "is_configured": true, 00:16:05.515 "data_offset": 2048, 00:16:05.515 "data_size": 63488 00:16:05.515 } 00:16:05.515 ] 00:16:05.515 }' 00:16:05.515 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.515 10:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties 
Existed_Raid 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:06.084 10:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:06.343 [2024-07-26 10:26:19.028510] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.343 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:06.343 "name": "Existed_Raid", 00:16:06.343 "aliases": [ 00:16:06.343 "f965089d-1dac-4152-9802-6026f005b0bd" 00:16:06.343 ], 00:16:06.343 "product_name": "Raid Volume", 00:16:06.343 "block_size": 512, 00:16:06.343 "num_blocks": 190464, 00:16:06.343 "uuid": "f965089d-1dac-4152-9802-6026f005b0bd", 00:16:06.343 "assigned_rate_limits": { 00:16:06.343 "rw_ios_per_sec": 0, 00:16:06.343 "rw_mbytes_per_sec": 0, 00:16:06.343 "r_mbytes_per_sec": 0, 00:16:06.343 "w_mbytes_per_sec": 0 00:16:06.343 }, 00:16:06.343 "claimed": false, 00:16:06.343 "zoned": false, 00:16:06.343 "supported_io_types": { 00:16:06.343 "read": true, 00:16:06.343 "write": true, 00:16:06.343 "unmap": true, 00:16:06.343 "flush": true, 00:16:06.343 "reset": true, 00:16:06.343 "nvme_admin": false, 00:16:06.343 "nvme_io": false, 00:16:06.343 "nvme_io_md": false, 00:16:06.343 "write_zeroes": true, 00:16:06.343 "zcopy": false, 00:16:06.343 "get_zone_info": false, 00:16:06.343 "zone_management": false, 00:16:06.343 "zone_append": false, 00:16:06.343 "compare": false, 00:16:06.343 "compare_and_write": false, 00:16:06.343 "abort": false, 00:16:06.343 "seek_hole": false, 00:16:06.343 "seek_data": false, 00:16:06.343 "copy": false, 00:16:06.343 "nvme_iov_md": false 00:16:06.343 }, 00:16:06.343 "memory_domains": [ 00:16:06.343 { 00:16:06.343 "dma_device_id": "system", 00:16:06.343 "dma_device_type": 1 00:16:06.343 }, 00:16:06.343 { 00:16:06.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.343 "dma_device_type": 2 00:16:06.343 }, 00:16:06.343 { 00:16:06.343 "dma_device_id": "system", 00:16:06.343 "dma_device_type": 1 00:16:06.343 }, 00:16:06.343 { 00:16:06.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.343 "dma_device_type": 2 00:16:06.343 }, 00:16:06.343 { 00:16:06.343 "dma_device_id": "system", 00:16:06.343 "dma_device_type": 1 00:16:06.343 }, 00:16:06.343 { 00:16:06.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.343 "dma_device_type": 2 00:16:06.343 } 00:16:06.343 ], 00:16:06.343 "driver_specific": { 00:16:06.343 "raid": { 00:16:06.343 "uuid": "f965089d-1dac-4152-9802-6026f005b0bd", 00:16:06.343 "strip_size_kb": 64, 00:16:06.343 "state": "online", 00:16:06.343 "raid_level": "raid0", 00:16:06.343 "superblock": true, 00:16:06.343 "num_base_bdevs": 3, 00:16:06.343 "num_base_bdevs_discovered": 3, 00:16:06.343 "num_base_bdevs_operational": 3, 00:16:06.343 "base_bdevs_list": [ 00:16:06.343 { 00:16:06.343 "name": "BaseBdev1", 
00:16:06.343 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:06.343 "is_configured": true, 00:16:06.343 "data_offset": 2048, 00:16:06.343 "data_size": 63488 00:16:06.343 }, 00:16:06.343 { 00:16:06.344 "name": "BaseBdev2", 00:16:06.344 "uuid": "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f", 00:16:06.344 "is_configured": true, 00:16:06.344 "data_offset": 2048, 00:16:06.344 "data_size": 63488 00:16:06.344 }, 00:16:06.344 { 00:16:06.344 "name": "BaseBdev3", 00:16:06.344 "uuid": "1acb2c4e-f0b3-4f96-b7a5-2296399db49a", 00:16:06.344 "is_configured": true, 00:16:06.344 "data_offset": 2048, 00:16:06.344 "data_size": 63488 00:16:06.344 } 00:16:06.344 ] 00:16:06.344 } 00:16:06.344 } 00:16:06.344 }' 00:16:06.344 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.344 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:06.344 BaseBdev2 00:16:06.344 BaseBdev3' 00:16:06.344 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.344 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:06.344 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.603 "name": "BaseBdev1", 00:16:06.603 "aliases": [ 00:16:06.603 "b2844215-a12d-4692-8537-e86f58137bfe" 00:16:06.603 ], 00:16:06.603 "product_name": "Malloc disk", 00:16:06.603 "block_size": 512, 00:16:06.603 "num_blocks": 65536, 00:16:06.603 "uuid": "b2844215-a12d-4692-8537-e86f58137bfe", 00:16:06.603 "assigned_rate_limits": { 00:16:06.603 "rw_ios_per_sec": 0, 00:16:06.603 "rw_mbytes_per_sec": 0, 00:16:06.603 "r_mbytes_per_sec": 0, 00:16:06.603 "w_mbytes_per_sec": 0 00:16:06.603 }, 00:16:06.603 "claimed": true, 00:16:06.603 "claim_type": "exclusive_write", 00:16:06.603 "zoned": false, 00:16:06.603 "supported_io_types": { 00:16:06.603 "read": true, 00:16:06.603 "write": true, 00:16:06.603 "unmap": true, 00:16:06.603 "flush": true, 00:16:06.603 "reset": true, 00:16:06.603 "nvme_admin": false, 00:16:06.603 "nvme_io": false, 00:16:06.603 "nvme_io_md": false, 00:16:06.603 "write_zeroes": true, 00:16:06.603 "zcopy": true, 00:16:06.603 "get_zone_info": false, 00:16:06.603 "zone_management": false, 00:16:06.603 "zone_append": false, 00:16:06.603 "compare": false, 00:16:06.603 "compare_and_write": false, 00:16:06.603 "abort": true, 00:16:06.603 "seek_hole": false, 00:16:06.603 "seek_data": false, 00:16:06.603 "copy": true, 00:16:06.603 "nvme_iov_md": false 00:16:06.603 }, 00:16:06.603 "memory_domains": [ 00:16:06.603 { 00:16:06.603 "dma_device_id": "system", 00:16:06.603 "dma_device_type": 1 00:16:06.603 }, 00:16:06.603 { 00:16:06.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.603 "dma_device_type": 2 00:16:06.603 } 00:16:06.603 ], 00:16:06.603 "driver_specific": {} 00:16:06.603 }' 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.603 10:26:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.603 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:06.862 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.121 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.121 "name": "BaseBdev2", 00:16:07.121 "aliases": [ 00:16:07.121 "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f" 00:16:07.121 ], 00:16:07.121 "product_name": "Malloc disk", 00:16:07.121 "block_size": 512, 00:16:07.121 "num_blocks": 65536, 00:16:07.121 "uuid": "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f", 00:16:07.121 "assigned_rate_limits": { 00:16:07.121 "rw_ios_per_sec": 0, 00:16:07.121 "rw_mbytes_per_sec": 0, 00:16:07.121 "r_mbytes_per_sec": 0, 00:16:07.121 "w_mbytes_per_sec": 0 00:16:07.121 }, 00:16:07.121 "claimed": true, 00:16:07.121 "claim_type": "exclusive_write", 00:16:07.121 "zoned": false, 00:16:07.121 "supported_io_types": { 00:16:07.121 "read": true, 00:16:07.121 "write": true, 00:16:07.121 "unmap": true, 00:16:07.121 "flush": true, 00:16:07.121 "reset": true, 00:16:07.121 "nvme_admin": false, 00:16:07.121 "nvme_io": false, 00:16:07.121 "nvme_io_md": false, 00:16:07.121 "write_zeroes": true, 00:16:07.121 "zcopy": true, 00:16:07.121 "get_zone_info": false, 00:16:07.121 "zone_management": false, 00:16:07.121 "zone_append": false, 00:16:07.121 "compare": false, 00:16:07.121 "compare_and_write": false, 00:16:07.121 "abort": true, 00:16:07.121 "seek_hole": false, 00:16:07.121 "seek_data": false, 00:16:07.121 "copy": true, 00:16:07.121 "nvme_iov_md": false 00:16:07.121 }, 00:16:07.121 "memory_domains": [ 00:16:07.121 { 00:16:07.121 "dma_device_id": "system", 00:16:07.121 "dma_device_type": 1 00:16:07.121 }, 00:16:07.121 { 00:16:07.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.121 "dma_device_type": 2 00:16:07.121 } 00:16:07.121 ], 00:16:07.121 "driver_specific": {} 00:16:07.121 }' 00:16:07.121 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.121 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.122 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.122 10:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.122 10:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.381 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:07.640 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.640 "name": "BaseBdev3", 00:16:07.640 "aliases": [ 00:16:07.640 "1acb2c4e-f0b3-4f96-b7a5-2296399db49a" 00:16:07.640 ], 00:16:07.640 "product_name": "Malloc disk", 00:16:07.640 "block_size": 512, 00:16:07.640 "num_blocks": 65536, 00:16:07.640 "uuid": "1acb2c4e-f0b3-4f96-b7a5-2296399db49a", 00:16:07.640 "assigned_rate_limits": { 00:16:07.640 "rw_ios_per_sec": 0, 00:16:07.640 "rw_mbytes_per_sec": 0, 00:16:07.640 "r_mbytes_per_sec": 0, 00:16:07.640 "w_mbytes_per_sec": 0 00:16:07.640 }, 00:16:07.640 "claimed": true, 00:16:07.640 "claim_type": "exclusive_write", 00:16:07.640 "zoned": false, 00:16:07.640 "supported_io_types": { 00:16:07.640 "read": true, 00:16:07.640 "write": true, 00:16:07.640 "unmap": true, 00:16:07.640 "flush": true, 00:16:07.640 "reset": true, 00:16:07.640 "nvme_admin": false, 00:16:07.640 "nvme_io": false, 00:16:07.640 "nvme_io_md": false, 00:16:07.640 "write_zeroes": true, 00:16:07.640 "zcopy": true, 00:16:07.640 "get_zone_info": false, 00:16:07.640 "zone_management": false, 00:16:07.640 "zone_append": false, 00:16:07.640 "compare": false, 00:16:07.640 "compare_and_write": false, 00:16:07.640 "abort": true, 00:16:07.640 "seek_hole": false, 00:16:07.640 "seek_data": false, 00:16:07.640 "copy": true, 00:16:07.640 "nvme_iov_md": false 00:16:07.640 }, 00:16:07.640 "memory_domains": [ 00:16:07.640 { 00:16:07.640 "dma_device_id": "system", 00:16:07.640 "dma_device_type": 1 00:16:07.640 }, 00:16:07.640 { 00:16:07.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.640 "dma_device_type": 2 00:16:07.640 } 00:16:07.640 ], 00:16:07.640 "driver_specific": {} 00:16:07.640 }' 00:16:07.640 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.640 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.922 10:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.922 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.200 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.200 10:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:08.200 [2024-07-26 10:26:21.017511] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.200 [2024-07-26 10:26:21.017535] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.200 [2024-07-26 10:26:21.017574] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.200 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.459 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.459 "name": "Existed_Raid", 00:16:08.459 "uuid": "f965089d-1dac-4152-9802-6026f005b0bd", 
00:16:08.459 "strip_size_kb": 64, 00:16:08.459 "state": "offline", 00:16:08.459 "raid_level": "raid0", 00:16:08.459 "superblock": true, 00:16:08.459 "num_base_bdevs": 3, 00:16:08.459 "num_base_bdevs_discovered": 2, 00:16:08.459 "num_base_bdevs_operational": 2, 00:16:08.459 "base_bdevs_list": [ 00:16:08.459 { 00:16:08.459 "name": null, 00:16:08.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.459 "is_configured": false, 00:16:08.459 "data_offset": 2048, 00:16:08.459 "data_size": 63488 00:16:08.459 }, 00:16:08.459 { 00:16:08.459 "name": "BaseBdev2", 00:16:08.459 "uuid": "45e8da5f-b1bc-46b3-b0d4-b524e8f9817f", 00:16:08.459 "is_configured": true, 00:16:08.459 "data_offset": 2048, 00:16:08.459 "data_size": 63488 00:16:08.459 }, 00:16:08.459 { 00:16:08.459 "name": "BaseBdev3", 00:16:08.459 "uuid": "1acb2c4e-f0b3-4f96-b7a5-2296399db49a", 00:16:08.459 "is_configured": true, 00:16:08.459 "data_offset": 2048, 00:16:08.459 "data_size": 63488 00:16:08.459 } 00:16:08.459 ] 00:16:08.459 }' 00:16:08.459 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.459 10:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.027 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:09.027 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:09.027 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.027 10:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:09.287 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:09.287 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.287 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:09.546 [2024-07-26 10:26:22.309927] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.546 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:09.546 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:09.546 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.546 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:09.805 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:09.805 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.805 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:10.064 [2024-07-26 10:26:22.777385] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.064 [2024-07-26 10:26:22.777423] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19b52d0 name Existed_Raid, state offline 00:16:10.064 
10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:10.064 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:10.064 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.064 10:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:10.323 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:10.323 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:10.323 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:10.323 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:10.323 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:10.323 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.582 BaseBdev2 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.582 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.841 [ 00:16:10.841 { 00:16:10.841 "name": "BaseBdev2", 00:16:10.841 "aliases": [ 00:16:10.841 "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6" 00:16:10.841 ], 00:16:10.841 "product_name": "Malloc disk", 00:16:10.841 "block_size": 512, 00:16:10.841 "num_blocks": 65536, 00:16:10.841 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:10.841 "assigned_rate_limits": { 00:16:10.841 "rw_ios_per_sec": 0, 00:16:10.841 "rw_mbytes_per_sec": 0, 00:16:10.841 "r_mbytes_per_sec": 0, 00:16:10.841 "w_mbytes_per_sec": 0 00:16:10.841 }, 00:16:10.841 "claimed": false, 00:16:10.841 "zoned": false, 00:16:10.841 "supported_io_types": { 00:16:10.841 "read": true, 00:16:10.841 "write": true, 00:16:10.841 "unmap": true, 00:16:10.841 "flush": true, 00:16:10.841 "reset": true, 00:16:10.841 "nvme_admin": false, 00:16:10.841 "nvme_io": false, 00:16:10.841 "nvme_io_md": false, 00:16:10.841 "write_zeroes": true, 00:16:10.841 "zcopy": true, 00:16:10.841 "get_zone_info": false, 00:16:10.841 "zone_management": false, 00:16:10.841 "zone_append": false, 00:16:10.841 "compare": false, 00:16:10.841 "compare_and_write": false, 00:16:10.841 "abort": true, 
00:16:10.841 "seek_hole": false, 00:16:10.841 "seek_data": false, 00:16:10.841 "copy": true, 00:16:10.841 "nvme_iov_md": false 00:16:10.841 }, 00:16:10.841 "memory_domains": [ 00:16:10.841 { 00:16:10.841 "dma_device_id": "system", 00:16:10.841 "dma_device_type": 1 00:16:10.841 }, 00:16:10.841 { 00:16:10.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.841 "dma_device_type": 2 00:16:10.841 } 00:16:10.841 ], 00:16:10.841 "driver_specific": {} 00:16:10.841 } 00:16:10.841 ] 00:16:10.841 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:10.841 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:10.841 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:10.841 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.100 BaseBdev3 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:11.100 10:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.359 10:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:11.618 [ 00:16:11.618 { 00:16:11.618 "name": "BaseBdev3", 00:16:11.618 "aliases": [ 00:16:11.618 "38932738-01d8-457b-b259-83bc3eae8dd7" 00:16:11.618 ], 00:16:11.618 "product_name": "Malloc disk", 00:16:11.618 "block_size": 512, 00:16:11.618 "num_blocks": 65536, 00:16:11.618 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:11.618 "assigned_rate_limits": { 00:16:11.618 "rw_ios_per_sec": 0, 00:16:11.618 "rw_mbytes_per_sec": 0, 00:16:11.618 "r_mbytes_per_sec": 0, 00:16:11.618 "w_mbytes_per_sec": 0 00:16:11.618 }, 00:16:11.618 "claimed": false, 00:16:11.618 "zoned": false, 00:16:11.618 "supported_io_types": { 00:16:11.618 "read": true, 00:16:11.618 "write": true, 00:16:11.618 "unmap": true, 00:16:11.618 "flush": true, 00:16:11.618 "reset": true, 00:16:11.618 "nvme_admin": false, 00:16:11.618 "nvme_io": false, 00:16:11.618 "nvme_io_md": false, 00:16:11.618 "write_zeroes": true, 00:16:11.618 "zcopy": true, 00:16:11.618 "get_zone_info": false, 00:16:11.618 "zone_management": false, 00:16:11.618 "zone_append": false, 00:16:11.618 "compare": false, 00:16:11.618 "compare_and_write": false, 00:16:11.618 "abort": true, 00:16:11.618 "seek_hole": false, 00:16:11.618 "seek_data": false, 00:16:11.618 "copy": true, 00:16:11.618 "nvme_iov_md": false 00:16:11.618 }, 00:16:11.618 "memory_domains": [ 00:16:11.618 { 00:16:11.618 "dma_device_id": "system", 00:16:11.618 
"dma_device_type": 1 00:16:11.618 }, 00:16:11.618 { 00:16:11.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.618 "dma_device_type": 2 00:16:11.618 } 00:16:11.618 ], 00:16:11.618 "driver_specific": {} 00:16:11.618 } 00:16:11.618 ] 00:16:11.618 10:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:11.618 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:11.618 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:11.618 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:11.878 [2024-07-26 10:26:24.584887] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.878 [2024-07-26 10:26:24.584921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.878 [2024-07-26 10:26:24.584942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.878 [2024-07-26 10:26:24.586132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.878 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.137 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.137 "name": "Existed_Raid", 00:16:12.137 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:12.137 "strip_size_kb": 64, 00:16:12.137 "state": "configuring", 00:16:12.137 "raid_level": "raid0", 00:16:12.137 "superblock": true, 00:16:12.137 "num_base_bdevs": 3, 00:16:12.137 "num_base_bdevs_discovered": 2, 00:16:12.137 "num_base_bdevs_operational": 3, 00:16:12.137 "base_bdevs_list": [ 00:16:12.137 { 00:16:12.137 "name": "BaseBdev1", 00:16:12.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.137 "is_configured": false, 00:16:12.137 "data_offset": 0, 
00:16:12.137 "data_size": 0 00:16:12.137 }, 00:16:12.137 { 00:16:12.137 "name": "BaseBdev2", 00:16:12.137 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:12.137 "is_configured": true, 00:16:12.137 "data_offset": 2048, 00:16:12.137 "data_size": 63488 00:16:12.137 }, 00:16:12.137 { 00:16:12.137 "name": "BaseBdev3", 00:16:12.137 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:12.137 "is_configured": true, 00:16:12.137 "data_offset": 2048, 00:16:12.137 "data_size": 63488 00:16:12.137 } 00:16:12.137 ] 00:16:12.137 }' 00:16:12.137 10:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.137 10:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.705 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:12.705 [2024-07-26 10:26:25.591508] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.964 "name": "Existed_Raid", 00:16:12.964 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:12.964 "strip_size_kb": 64, 00:16:12.964 "state": "configuring", 00:16:12.964 "raid_level": "raid0", 00:16:12.964 "superblock": true, 00:16:12.964 "num_base_bdevs": 3, 00:16:12.964 "num_base_bdevs_discovered": 1, 00:16:12.964 "num_base_bdevs_operational": 3, 00:16:12.964 "base_bdevs_list": [ 00:16:12.964 { 00:16:12.964 "name": "BaseBdev1", 00:16:12.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.964 "is_configured": false, 00:16:12.964 "data_offset": 0, 00:16:12.964 "data_size": 0 00:16:12.964 }, 00:16:12.964 { 00:16:12.964 "name": null, 00:16:12.964 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:12.964 "is_configured": false, 00:16:12.964 "data_offset": 2048, 00:16:12.964 "data_size": 63488 00:16:12.964 }, 00:16:12.964 { 
00:16:12.964 "name": "BaseBdev3", 00:16:12.964 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:12.964 "is_configured": true, 00:16:12.964 "data_offset": 2048, 00:16:12.964 "data_size": 63488 00:16:12.964 } 00:16:12.964 ] 00:16:12.964 }' 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.964 10:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.533 10:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.533 10:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:13.792 10:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:13.792 10:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.051 [2024-07-26 10:26:26.862226] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.051 BaseBdev1 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.051 10:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.311 10:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.570 [ 00:16:14.570 { 00:16:14.570 "name": "BaseBdev1", 00:16:14.570 "aliases": [ 00:16:14.570 "9753580b-5b8a-4ca0-923f-418938f8897c" 00:16:14.570 ], 00:16:14.570 "product_name": "Malloc disk", 00:16:14.570 "block_size": 512, 00:16:14.570 "num_blocks": 65536, 00:16:14.570 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:14.570 "assigned_rate_limits": { 00:16:14.570 "rw_ios_per_sec": 0, 00:16:14.570 "rw_mbytes_per_sec": 0, 00:16:14.570 "r_mbytes_per_sec": 0, 00:16:14.570 "w_mbytes_per_sec": 0 00:16:14.570 }, 00:16:14.570 "claimed": true, 00:16:14.570 "claim_type": "exclusive_write", 00:16:14.570 "zoned": false, 00:16:14.570 "supported_io_types": { 00:16:14.570 "read": true, 00:16:14.570 "write": true, 00:16:14.570 "unmap": true, 00:16:14.570 "flush": true, 00:16:14.570 "reset": true, 00:16:14.570 "nvme_admin": false, 00:16:14.570 "nvme_io": false, 00:16:14.570 "nvme_io_md": false, 00:16:14.570 "write_zeroes": true, 00:16:14.570 "zcopy": true, 00:16:14.570 "get_zone_info": false, 00:16:14.570 "zone_management": false, 00:16:14.570 "zone_append": false, 00:16:14.570 "compare": false, 00:16:14.570 "compare_and_write": false, 
00:16:14.570 "abort": true, 00:16:14.570 "seek_hole": false, 00:16:14.570 "seek_data": false, 00:16:14.571 "copy": true, 00:16:14.571 "nvme_iov_md": false 00:16:14.571 }, 00:16:14.571 "memory_domains": [ 00:16:14.571 { 00:16:14.571 "dma_device_id": "system", 00:16:14.571 "dma_device_type": 1 00:16:14.571 }, 00:16:14.571 { 00:16:14.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.571 "dma_device_type": 2 00:16:14.571 } 00:16:14.571 ], 00:16:14.571 "driver_specific": {} 00:16:14.571 } 00:16:14.571 ] 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.571 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.830 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.830 "name": "Existed_Raid", 00:16:14.830 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:14.830 "strip_size_kb": 64, 00:16:14.830 "state": "configuring", 00:16:14.830 "raid_level": "raid0", 00:16:14.830 "superblock": true, 00:16:14.830 "num_base_bdevs": 3, 00:16:14.830 "num_base_bdevs_discovered": 2, 00:16:14.830 "num_base_bdevs_operational": 3, 00:16:14.830 "base_bdevs_list": [ 00:16:14.830 { 00:16:14.830 "name": "BaseBdev1", 00:16:14.830 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:14.830 "is_configured": true, 00:16:14.830 "data_offset": 2048, 00:16:14.830 "data_size": 63488 00:16:14.830 }, 00:16:14.830 { 00:16:14.830 "name": null, 00:16:14.830 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:14.830 "is_configured": false, 00:16:14.830 "data_offset": 2048, 00:16:14.830 "data_size": 63488 00:16:14.830 }, 00:16:14.830 { 00:16:14.830 "name": "BaseBdev3", 00:16:14.830 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:14.830 "is_configured": true, 00:16:14.830 "data_offset": 2048, 00:16:14.830 "data_size": 63488 00:16:14.830 } 00:16:14.830 ] 00:16:14.830 }' 00:16:14.830 10:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.830 10:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:15.398 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.398 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:15.658 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:15.658 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:15.917 [2024-07-26 10:26:28.562752] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.917 "name": "Existed_Raid", 00:16:15.917 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:15.917 "strip_size_kb": 64, 00:16:15.917 "state": "configuring", 00:16:15.917 "raid_level": "raid0", 00:16:15.917 "superblock": true, 00:16:15.917 "num_base_bdevs": 3, 00:16:15.917 "num_base_bdevs_discovered": 1, 00:16:15.917 "num_base_bdevs_operational": 3, 00:16:15.917 "base_bdevs_list": [ 00:16:15.917 { 00:16:15.917 "name": "BaseBdev1", 00:16:15.917 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:15.917 "is_configured": true, 00:16:15.917 "data_offset": 2048, 00:16:15.917 "data_size": 63488 00:16:15.917 }, 00:16:15.917 { 00:16:15.917 "name": null, 00:16:15.917 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:15.917 "is_configured": false, 00:16:15.917 "data_offset": 2048, 00:16:15.917 "data_size": 63488 00:16:15.917 }, 00:16:15.917 { 00:16:15.917 "name": null, 00:16:15.917 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:15.917 "is_configured": false, 00:16:15.917 "data_offset": 2048, 00:16:15.917 "data_size": 63488 00:16:15.917 } 00:16:15.917 ] 00:16:15.917 }' 00:16:15.917 10:26:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.917 10:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.855 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.855 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:16.855 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:16.855 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:17.114 [2024-07-26 10:26:29.834262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.114 10:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.373 10:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.373 "name": "Existed_Raid", 00:16:17.373 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:17.373 "strip_size_kb": 64, 00:16:17.373 "state": "configuring", 00:16:17.373 "raid_level": "raid0", 00:16:17.373 "superblock": true, 00:16:17.373 "num_base_bdevs": 3, 00:16:17.373 "num_base_bdevs_discovered": 2, 00:16:17.373 "num_base_bdevs_operational": 3, 00:16:17.373 "base_bdevs_list": [ 00:16:17.373 { 00:16:17.373 "name": "BaseBdev1", 00:16:17.373 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:17.373 "is_configured": true, 00:16:17.373 "data_offset": 2048, 00:16:17.373 "data_size": 63488 00:16:17.373 }, 00:16:17.373 { 00:16:17.373 "name": null, 00:16:17.373 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:17.373 "is_configured": false, 00:16:17.373 "data_offset": 2048, 00:16:17.373 "data_size": 63488 00:16:17.373 }, 00:16:17.373 { 00:16:17.373 "name": "BaseBdev3", 00:16:17.373 "uuid": 
"38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:17.373 "is_configured": true, 00:16:17.373 "data_offset": 2048, 00:16:17.373 "data_size": 63488 00:16:17.373 } 00:16:17.373 ] 00:16:17.373 }' 00:16:17.373 10:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.373 10:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.940 10:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.940 10:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:18.199 10:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:18.199 10:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:18.199 [2024-07-26 10:26:31.093594] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.458 "name": "Existed_Raid", 00:16:18.458 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:18.458 "strip_size_kb": 64, 00:16:18.458 "state": "configuring", 00:16:18.458 "raid_level": "raid0", 00:16:18.458 "superblock": true, 00:16:18.458 "num_base_bdevs": 3, 00:16:18.458 "num_base_bdevs_discovered": 1, 00:16:18.458 "num_base_bdevs_operational": 3, 00:16:18.458 "base_bdevs_list": [ 00:16:18.458 { 00:16:18.458 "name": null, 00:16:18.458 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:18.458 "is_configured": false, 00:16:18.458 "data_offset": 2048, 00:16:18.458 "data_size": 63488 00:16:18.458 }, 00:16:18.458 { 00:16:18.458 "name": null, 00:16:18.458 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:18.458 "is_configured": 
false, 00:16:18.458 "data_offset": 2048, 00:16:18.458 "data_size": 63488 00:16:18.458 }, 00:16:18.458 { 00:16:18.458 "name": "BaseBdev3", 00:16:18.458 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:18.458 "is_configured": true, 00:16:18.458 "data_offset": 2048, 00:16:18.458 "data_size": 63488 00:16:18.458 } 00:16:18.458 ] 00:16:18.458 }' 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.458 10:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.027 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.027 10:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:19.286 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:19.286 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:19.545 [2024-07-26 10:26:32.294920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.545 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.804 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.804 "name": "Existed_Raid", 00:16:19.804 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:19.804 "strip_size_kb": 64, 00:16:19.804 "state": "configuring", 00:16:19.804 "raid_level": "raid0", 00:16:19.804 "superblock": true, 00:16:19.804 "num_base_bdevs": 3, 00:16:19.804 "num_base_bdevs_discovered": 2, 00:16:19.804 "num_base_bdevs_operational": 3, 00:16:19.804 "base_bdevs_list": [ 00:16:19.804 { 00:16:19.804 "name": null, 00:16:19.804 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:19.804 "is_configured": false, 00:16:19.804 
"data_offset": 2048, 00:16:19.804 "data_size": 63488 00:16:19.804 }, 00:16:19.804 { 00:16:19.804 "name": "BaseBdev2", 00:16:19.804 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:19.804 "is_configured": true, 00:16:19.804 "data_offset": 2048, 00:16:19.804 "data_size": 63488 00:16:19.804 }, 00:16:19.804 { 00:16:19.804 "name": "BaseBdev3", 00:16:19.804 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:19.804 "is_configured": true, 00:16:19.804 "data_offset": 2048, 00:16:19.804 "data_size": 63488 00:16:19.804 } 00:16:19.804 ] 00:16:19.804 }' 00:16:19.804 10:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.804 10:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.372 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:20.372 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.631 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:20.631 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.631 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9753580b-5b8a-4ca0-923f-418938f8897c 00:16:20.890 [2024-07-26 10:26:33.753939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:20.890 [2024-07-26 10:26:33.754071] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1803fb0 00:16:20.890 [2024-07-26 10:26:33.754083] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.890 [2024-07-26 10:26:33.754255] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17ef1d0 00:16:20.890 [2024-07-26 10:26:33.754354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1803fb0 00:16:20.890 [2024-07-26 10:26:33.754363] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1803fb0 00:16:20.890 [2024-07-26 10:26:33.754445] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.890 NewBaseBdev 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:20.890 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.152 10:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:21.453 [ 00:16:21.453 { 00:16:21.453 "name": "NewBaseBdev", 00:16:21.453 "aliases": [ 00:16:21.453 "9753580b-5b8a-4ca0-923f-418938f8897c" 00:16:21.453 ], 00:16:21.453 "product_name": "Malloc disk", 00:16:21.453 "block_size": 512, 00:16:21.453 "num_blocks": 65536, 00:16:21.453 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:21.453 "assigned_rate_limits": { 00:16:21.453 "rw_ios_per_sec": 0, 00:16:21.453 "rw_mbytes_per_sec": 0, 00:16:21.453 "r_mbytes_per_sec": 0, 00:16:21.453 "w_mbytes_per_sec": 0 00:16:21.453 }, 00:16:21.453 "claimed": true, 00:16:21.453 "claim_type": "exclusive_write", 00:16:21.453 "zoned": false, 00:16:21.453 "supported_io_types": { 00:16:21.453 "read": true, 00:16:21.453 "write": true, 00:16:21.453 "unmap": true, 00:16:21.453 "flush": true, 00:16:21.453 "reset": true, 00:16:21.453 "nvme_admin": false, 00:16:21.453 "nvme_io": false, 00:16:21.453 "nvme_io_md": false, 00:16:21.453 "write_zeroes": true, 00:16:21.453 "zcopy": true, 00:16:21.453 "get_zone_info": false, 00:16:21.453 "zone_management": false, 00:16:21.453 "zone_append": false, 00:16:21.453 "compare": false, 00:16:21.453 "compare_and_write": false, 00:16:21.453 "abort": true, 00:16:21.453 "seek_hole": false, 00:16:21.453 "seek_data": false, 00:16:21.453 "copy": true, 00:16:21.453 "nvme_iov_md": false 00:16:21.453 }, 00:16:21.453 "memory_domains": [ 00:16:21.453 { 00:16:21.453 "dma_device_id": "system", 00:16:21.453 "dma_device_type": 1 00:16:21.453 }, 00:16:21.453 { 00:16:21.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.453 "dma_device_type": 2 00:16:21.453 } 00:16:21.453 ], 00:16:21.453 "driver_specific": {} 00:16:21.453 } 00:16:21.453 ] 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.453 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.454 10:26:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.722 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.722 "name": "Existed_Raid", 00:16:21.722 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:21.722 "strip_size_kb": 64, 00:16:21.722 "state": "online", 00:16:21.722 "raid_level": "raid0", 00:16:21.722 "superblock": true, 00:16:21.722 "num_base_bdevs": 3, 00:16:21.722 "num_base_bdevs_discovered": 3, 00:16:21.722 "num_base_bdevs_operational": 3, 00:16:21.722 "base_bdevs_list": [ 00:16:21.722 { 00:16:21.722 "name": "NewBaseBdev", 00:16:21.722 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:21.722 "is_configured": true, 00:16:21.722 "data_offset": 2048, 00:16:21.722 "data_size": 63488 00:16:21.722 }, 00:16:21.722 { 00:16:21.722 "name": "BaseBdev2", 00:16:21.722 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:21.722 "is_configured": true, 00:16:21.722 "data_offset": 2048, 00:16:21.722 "data_size": 63488 00:16:21.722 }, 00:16:21.722 { 00:16:21.722 "name": "BaseBdev3", 00:16:21.722 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:21.722 "is_configured": true, 00:16:21.722 "data_offset": 2048, 00:16:21.722 "data_size": 63488 00:16:21.722 } 00:16:21.722 ] 00:16:21.722 }' 00:16:21.722 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.722 10:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:22.289 10:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:22.548 [2024-07-26 10:26:35.202039] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.548 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:22.548 "name": "Existed_Raid", 00:16:22.548 "aliases": [ 00:16:22.548 "c2f3f0a6-8679-4d96-9f05-19476de36c9d" 00:16:22.548 ], 00:16:22.548 "product_name": "Raid Volume", 00:16:22.548 "block_size": 512, 00:16:22.548 "num_blocks": 190464, 00:16:22.548 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:22.548 "assigned_rate_limits": { 00:16:22.548 "rw_ios_per_sec": 0, 00:16:22.548 "rw_mbytes_per_sec": 0, 00:16:22.548 "r_mbytes_per_sec": 0, 00:16:22.548 "w_mbytes_per_sec": 0 00:16:22.548 }, 00:16:22.548 "claimed": false, 00:16:22.548 "zoned": false, 00:16:22.548 "supported_io_types": { 00:16:22.548 "read": true, 00:16:22.548 "write": true, 00:16:22.548 "unmap": true, 00:16:22.548 "flush": true, 00:16:22.548 "reset": true, 00:16:22.548 "nvme_admin": false, 00:16:22.548 "nvme_io": false, 00:16:22.548 "nvme_io_md": 
false, 00:16:22.548 "write_zeroes": true, 00:16:22.548 "zcopy": false, 00:16:22.548 "get_zone_info": false, 00:16:22.548 "zone_management": false, 00:16:22.548 "zone_append": false, 00:16:22.548 "compare": false, 00:16:22.548 "compare_and_write": false, 00:16:22.548 "abort": false, 00:16:22.548 "seek_hole": false, 00:16:22.548 "seek_data": false, 00:16:22.548 "copy": false, 00:16:22.548 "nvme_iov_md": false 00:16:22.548 }, 00:16:22.548 "memory_domains": [ 00:16:22.548 { 00:16:22.548 "dma_device_id": "system", 00:16:22.548 "dma_device_type": 1 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.548 "dma_device_type": 2 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "dma_device_id": "system", 00:16:22.548 "dma_device_type": 1 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.548 "dma_device_type": 2 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "dma_device_id": "system", 00:16:22.548 "dma_device_type": 1 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.548 "dma_device_type": 2 00:16:22.548 } 00:16:22.548 ], 00:16:22.548 "driver_specific": { 00:16:22.548 "raid": { 00:16:22.548 "uuid": "c2f3f0a6-8679-4d96-9f05-19476de36c9d", 00:16:22.548 "strip_size_kb": 64, 00:16:22.548 "state": "online", 00:16:22.548 "raid_level": "raid0", 00:16:22.548 "superblock": true, 00:16:22.548 "num_base_bdevs": 3, 00:16:22.548 "num_base_bdevs_discovered": 3, 00:16:22.548 "num_base_bdevs_operational": 3, 00:16:22.548 "base_bdevs_list": [ 00:16:22.548 { 00:16:22.548 "name": "NewBaseBdev", 00:16:22.548 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:22.548 "is_configured": true, 00:16:22.548 "data_offset": 2048, 00:16:22.548 "data_size": 63488 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "name": "BaseBdev2", 00:16:22.548 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:22.548 "is_configured": true, 00:16:22.548 "data_offset": 2048, 00:16:22.548 "data_size": 63488 00:16:22.548 }, 00:16:22.548 { 00:16:22.548 "name": "BaseBdev3", 00:16:22.548 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:22.548 "is_configured": true, 00:16:22.548 "data_offset": 2048, 00:16:22.548 "data_size": 63488 00:16:22.548 } 00:16:22.548 ] 00:16:22.548 } 00:16:22.548 } 00:16:22.548 }' 00:16:22.548 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.548 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:22.548 BaseBdev2 00:16:22.548 BaseBdev3' 00:16:22.548 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:22.548 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:22.548 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.807 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.807 "name": "NewBaseBdev", 00:16:22.807 "aliases": [ 00:16:22.807 "9753580b-5b8a-4ca0-923f-418938f8897c" 00:16:22.807 ], 00:16:22.807 "product_name": "Malloc disk", 00:16:22.807 "block_size": 512, 00:16:22.807 "num_blocks": 65536, 00:16:22.807 "uuid": "9753580b-5b8a-4ca0-923f-418938f8897c", 00:16:22.807 "assigned_rate_limits": { 00:16:22.807 
"rw_ios_per_sec": 0, 00:16:22.807 "rw_mbytes_per_sec": 0, 00:16:22.807 "r_mbytes_per_sec": 0, 00:16:22.807 "w_mbytes_per_sec": 0 00:16:22.807 }, 00:16:22.807 "claimed": true, 00:16:22.807 "claim_type": "exclusive_write", 00:16:22.807 "zoned": false, 00:16:22.807 "supported_io_types": { 00:16:22.807 "read": true, 00:16:22.807 "write": true, 00:16:22.807 "unmap": true, 00:16:22.807 "flush": true, 00:16:22.807 "reset": true, 00:16:22.807 "nvme_admin": false, 00:16:22.807 "nvme_io": false, 00:16:22.807 "nvme_io_md": false, 00:16:22.807 "write_zeroes": true, 00:16:22.807 "zcopy": true, 00:16:22.807 "get_zone_info": false, 00:16:22.807 "zone_management": false, 00:16:22.807 "zone_append": false, 00:16:22.807 "compare": false, 00:16:22.807 "compare_and_write": false, 00:16:22.807 "abort": true, 00:16:22.807 "seek_hole": false, 00:16:22.807 "seek_data": false, 00:16:22.807 "copy": true, 00:16:22.807 "nvme_iov_md": false 00:16:22.807 }, 00:16:22.807 "memory_domains": [ 00:16:22.807 { 00:16:22.807 "dma_device_id": "system", 00:16:22.807 "dma_device_type": 1 00:16:22.807 }, 00:16:22.807 { 00:16:22.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.807 "dma_device_type": 2 00:16:22.807 } 00:16:22.807 ], 00:16:22.807 "driver_specific": {} 00:16:22.807 }' 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.808 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:23.067 10:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.325 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.325 "name": "BaseBdev2", 00:16:23.325 "aliases": [ 00:16:23.325 "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6" 00:16:23.325 ], 00:16:23.325 "product_name": "Malloc disk", 00:16:23.325 "block_size": 512, 00:16:23.325 "num_blocks": 65536, 00:16:23.325 "uuid": "84e2bcdd-e905-41a1-b4e1-975c0c65ffc6", 00:16:23.325 "assigned_rate_limits": { 00:16:23.325 "rw_ios_per_sec": 0, 00:16:23.325 "rw_mbytes_per_sec": 0, 00:16:23.325 "r_mbytes_per_sec": 0, 00:16:23.325 "w_mbytes_per_sec": 0 
00:16:23.326 }, 00:16:23.326 "claimed": true, 00:16:23.326 "claim_type": "exclusive_write", 00:16:23.326 "zoned": false, 00:16:23.326 "supported_io_types": { 00:16:23.326 "read": true, 00:16:23.326 "write": true, 00:16:23.326 "unmap": true, 00:16:23.326 "flush": true, 00:16:23.326 "reset": true, 00:16:23.326 "nvme_admin": false, 00:16:23.326 "nvme_io": false, 00:16:23.326 "nvme_io_md": false, 00:16:23.326 "write_zeroes": true, 00:16:23.326 "zcopy": true, 00:16:23.326 "get_zone_info": false, 00:16:23.326 "zone_management": false, 00:16:23.326 "zone_append": false, 00:16:23.326 "compare": false, 00:16:23.326 "compare_and_write": false, 00:16:23.326 "abort": true, 00:16:23.326 "seek_hole": false, 00:16:23.326 "seek_data": false, 00:16:23.326 "copy": true, 00:16:23.326 "nvme_iov_md": false 00:16:23.326 }, 00:16:23.326 "memory_domains": [ 00:16:23.326 { 00:16:23.326 "dma_device_id": "system", 00:16:23.326 "dma_device_type": 1 00:16:23.326 }, 00:16:23.326 { 00:16:23.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.326 "dma_device_type": 2 00:16:23.326 } 00:16:23.326 ], 00:16:23.326 "driver_specific": {} 00:16:23.326 }' 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.326 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.584 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:23.585 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.843 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.843 "name": "BaseBdev3", 00:16:23.843 "aliases": [ 00:16:23.843 "38932738-01d8-457b-b259-83bc3eae8dd7" 00:16:23.843 ], 00:16:23.843 "product_name": "Malloc disk", 00:16:23.843 "block_size": 512, 00:16:23.843 "num_blocks": 65536, 00:16:23.843 "uuid": "38932738-01d8-457b-b259-83bc3eae8dd7", 00:16:23.843 "assigned_rate_limits": { 00:16:23.843 "rw_ios_per_sec": 0, 00:16:23.843 "rw_mbytes_per_sec": 0, 00:16:23.843 "r_mbytes_per_sec": 0, 00:16:23.843 "w_mbytes_per_sec": 0 00:16:23.843 }, 00:16:23.843 "claimed": true, 00:16:23.843 "claim_type": "exclusive_write", 00:16:23.843 "zoned": false, 00:16:23.843 
"supported_io_types": { 00:16:23.843 "read": true, 00:16:23.843 "write": true, 00:16:23.843 "unmap": true, 00:16:23.843 "flush": true, 00:16:23.843 "reset": true, 00:16:23.843 "nvme_admin": false, 00:16:23.843 "nvme_io": false, 00:16:23.843 "nvme_io_md": false, 00:16:23.843 "write_zeroes": true, 00:16:23.843 "zcopy": true, 00:16:23.843 "get_zone_info": false, 00:16:23.843 "zone_management": false, 00:16:23.843 "zone_append": false, 00:16:23.843 "compare": false, 00:16:23.843 "compare_and_write": false, 00:16:23.843 "abort": true, 00:16:23.843 "seek_hole": false, 00:16:23.843 "seek_data": false, 00:16:23.843 "copy": true, 00:16:23.843 "nvme_iov_md": false 00:16:23.843 }, 00:16:23.843 "memory_domains": [ 00:16:23.843 { 00:16:23.843 "dma_device_id": "system", 00:16:23.843 "dma_device_type": 1 00:16:23.843 }, 00:16:23.843 { 00:16:23.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.843 "dma_device_type": 2 00:16:23.843 } 00:16:23.843 ], 00:16:23.843 "driver_specific": {} 00:16:23.843 }' 00:16:23.843 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.843 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.843 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.843 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.102 10:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:24.362 [2024-07-26 10:26:37.166955] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.362 [2024-07-26 10:26:37.166977] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.362 [2024-07-26 10:26:37.167022] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.362 [2024-07-26 10:26:37.167070] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.362 [2024-07-26 10:26:37.167080] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1803fb0 name Existed_Raid, state offline 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3375172 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3375172 ']' 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3375172 00:16:24.362 10:26:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3375172 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3375172' 00:16:24.362 killing process with pid 3375172 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3375172 00:16:24.362 [2024-07-26 10:26:37.245444] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.362 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3375172 00:16:24.620 [2024-07-26 10:26:37.269056] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.620 10:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:24.620 00:16:24.620 real 0m26.756s 00:16:24.620 user 0m48.997s 00:16:24.620 sys 0m4.869s 00:16:24.620 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.620 10:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.620 ************************************ 00:16:24.620 END TEST raid_state_function_test_sb 00:16:24.620 ************************************ 00:16:24.621 10:26:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:24.621 10:26:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:24.621 10:26:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.621 10:26:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.621 ************************************ 00:16:24.621 START TEST raid_superblock_test 00:16:24.621 ************************************ 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local 
strip_size_create_arg 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3380292 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3380292 /var/tmp/spdk-raid.sock 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3380292 ']' 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:24.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.880 10:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.880 [2024-07-26 10:26:37.585364] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
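The trace that follows drives everything through rpc.py against the /var/tmp/spdk-raid.sock socket opened above. As a minimal sketch (assuming a bdev_svc instance is already listening on that socket, and using only RPCs that appear later in this log), the raid0-with-superblock setup being exercised here could be reproduced by hand roughly like this:

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      # 32 MiB malloc bdev with 512-byte blocks (65536 blocks), wrapped in a passthru bdev
      $RPC bdev_malloc_create 32 512 -b malloc$i
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # -z 64 sets the strip size in KiB, -s requests an on-disk superblock
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  # the ".state" extraction is illustrative; the test below selects the whole object with jq
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"

Teardown in the log mirrors this sequence with bdev_raid_delete raid_bdev1 followed by bdev_passthru_delete for each pt bdev.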
00:16:24.880 [2024-07-26 10:26:37.585420] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380292 ] 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:24.880 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:24.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:24.880 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:24.880 [2024-07-26 10:26:37.717800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.880 [2024-07-26 10:26:37.762643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.140 [2024-07-26 10:26:37.827405] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.140 [2024-07-26 10:26:37.827450] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.707 10:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.707 10:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:25.708 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:25.966 malloc1 00:16:25.967 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.226 [2024-07-26 10:26:38.922567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.226 [2024-07-26 10:26:38.922611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.226 [2024-07-26 10:26:38.922630] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1678270 00:16:26.226 [2024-07-26 10:26:38.922641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.226 [2024-07-26 10:26:38.924061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.226 [2024-07-26 10:26:38.924088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.226 pt1 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.226 10:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:26.485 malloc2 00:16:26.485 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.485 [2024-07-26 10:26:39.376195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.485 [2024-07-26 10:26:39.376233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.485 [2024-07-26 10:26:39.376249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16342f0 00:16:26.485 [2024-07-26 10:26:39.376260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.485 [2024-07-26 10:26:39.377675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.485 [2024-07-26 10:26:39.377702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.485 pt2 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:26.744 malloc3 00:16:26.744 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:27.003 [2024-07-26 10:26:39.825644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:27.003 [2024-07-26 10:26:39.825685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.003 [2024-07-26 10:26:39.825701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15fe650 00:16:27.003 [2024-07-26 10:26:39.825713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.003 [2024-07-26 10:26:39.827176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.003 [2024-07-26 10:26:39.827202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:27.003 pt3 00:16:27.003 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:27.003 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:27.003 10:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:27.262 [2024-07-26 10:26:40.046258] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.262 [2024-07-26 10:26:40.047441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.262 [2024-07-26 10:26:40.047493] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:27.262 [2024-07-26 10:26:40.047617] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15ffd00 00:16:27.262 [2024-07-26 10:26:40.047627] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:27.262 [2024-07-26 10:26:40.047810] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14dd320 00:16:27.262 [2024-07-26 10:26:40.047930] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15ffd00 00:16:27.262 [2024-07-26 10:26:40.047939] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15ffd00 00:16:27.262 [2024-07-26 10:26:40.048041] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.262 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.521 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.521 "name": "raid_bdev1", 00:16:27.521 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:27.521 "strip_size_kb": 64, 00:16:27.521 "state": "online", 00:16:27.521 "raid_level": "raid0", 00:16:27.521 "superblock": true, 00:16:27.521 "num_base_bdevs": 3, 00:16:27.521 "num_base_bdevs_discovered": 3, 00:16:27.521 "num_base_bdevs_operational": 3, 00:16:27.521 "base_bdevs_list": [ 00:16:27.521 { 00:16:27.521 "name": "pt1", 00:16:27.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.521 "is_configured": true, 00:16:27.521 "data_offset": 2048, 00:16:27.521 "data_size": 63488 00:16:27.521 }, 00:16:27.521 { 00:16:27.521 "name": "pt2", 00:16:27.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.521 "is_configured": true, 00:16:27.521 "data_offset": 2048, 00:16:27.521 "data_size": 63488 00:16:27.521 }, 00:16:27.521 { 00:16:27.521 "name": "pt3", 00:16:27.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.521 "is_configured": true, 00:16:27.521 "data_offset": 2048, 00:16:27.521 "data_size": 63488 00:16:27.521 } 00:16:27.521 ] 00:16:27.521 }' 00:16:27.521 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.521 10:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:28.089 10:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.348 [2024-07-26 10:26:41.061145] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.348 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.348 "name": "raid_bdev1", 00:16:28.348 "aliases": [ 00:16:28.348 "af1896f7-9655-4121-8cd8-33d0ea55021a" 00:16:28.348 ], 00:16:28.348 "product_name": "Raid Volume", 00:16:28.348 "block_size": 512, 00:16:28.348 "num_blocks": 190464, 00:16:28.348 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:28.348 "assigned_rate_limits": { 00:16:28.348 "rw_ios_per_sec": 0, 00:16:28.348 "rw_mbytes_per_sec": 0, 00:16:28.348 
"r_mbytes_per_sec": 0, 00:16:28.348 "w_mbytes_per_sec": 0 00:16:28.348 }, 00:16:28.348 "claimed": false, 00:16:28.348 "zoned": false, 00:16:28.348 "supported_io_types": { 00:16:28.348 "read": true, 00:16:28.348 "write": true, 00:16:28.348 "unmap": true, 00:16:28.348 "flush": true, 00:16:28.348 "reset": true, 00:16:28.348 "nvme_admin": false, 00:16:28.348 "nvme_io": false, 00:16:28.348 "nvme_io_md": false, 00:16:28.348 "write_zeroes": true, 00:16:28.348 "zcopy": false, 00:16:28.348 "get_zone_info": false, 00:16:28.348 "zone_management": false, 00:16:28.348 "zone_append": false, 00:16:28.348 "compare": false, 00:16:28.348 "compare_and_write": false, 00:16:28.348 "abort": false, 00:16:28.348 "seek_hole": false, 00:16:28.348 "seek_data": false, 00:16:28.348 "copy": false, 00:16:28.348 "nvme_iov_md": false 00:16:28.348 }, 00:16:28.348 "memory_domains": [ 00:16:28.348 { 00:16:28.348 "dma_device_id": "system", 00:16:28.348 "dma_device_type": 1 00:16:28.348 }, 00:16:28.348 { 00:16:28.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.348 "dma_device_type": 2 00:16:28.348 }, 00:16:28.348 { 00:16:28.348 "dma_device_id": "system", 00:16:28.348 "dma_device_type": 1 00:16:28.348 }, 00:16:28.348 { 00:16:28.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.348 "dma_device_type": 2 00:16:28.348 }, 00:16:28.348 { 00:16:28.348 "dma_device_id": "system", 00:16:28.348 "dma_device_type": 1 00:16:28.348 }, 00:16:28.349 { 00:16:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.349 "dma_device_type": 2 00:16:28.349 } 00:16:28.349 ], 00:16:28.349 "driver_specific": { 00:16:28.349 "raid": { 00:16:28.349 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:28.349 "strip_size_kb": 64, 00:16:28.349 "state": "online", 00:16:28.349 "raid_level": "raid0", 00:16:28.349 "superblock": true, 00:16:28.349 "num_base_bdevs": 3, 00:16:28.349 "num_base_bdevs_discovered": 3, 00:16:28.349 "num_base_bdevs_operational": 3, 00:16:28.349 "base_bdevs_list": [ 00:16:28.349 { 00:16:28.349 "name": "pt1", 00:16:28.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.349 "is_configured": true, 00:16:28.349 "data_offset": 2048, 00:16:28.349 "data_size": 63488 00:16:28.349 }, 00:16:28.349 { 00:16:28.349 "name": "pt2", 00:16:28.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.349 "is_configured": true, 00:16:28.349 "data_offset": 2048, 00:16:28.349 "data_size": 63488 00:16:28.349 }, 00:16:28.349 { 00:16:28.349 "name": "pt3", 00:16:28.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.349 "is_configured": true, 00:16:28.349 "data_offset": 2048, 00:16:28.349 "data_size": 63488 00:16:28.349 } 00:16:28.349 ] 00:16:28.349 } 00:16:28.349 } 00:16:28.349 }' 00:16:28.349 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.349 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:28.349 pt2 00:16:28.349 pt3' 00:16:28.349 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.349 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:28.349 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.608 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.608 "name": "pt1", 00:16:28.608 "aliases": [ 
00:16:28.608 "00000000-0000-0000-0000-000000000001" 00:16:28.608 ], 00:16:28.608 "product_name": "passthru", 00:16:28.608 "block_size": 512, 00:16:28.608 "num_blocks": 65536, 00:16:28.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.608 "assigned_rate_limits": { 00:16:28.608 "rw_ios_per_sec": 0, 00:16:28.608 "rw_mbytes_per_sec": 0, 00:16:28.608 "r_mbytes_per_sec": 0, 00:16:28.608 "w_mbytes_per_sec": 0 00:16:28.608 }, 00:16:28.608 "claimed": true, 00:16:28.608 "claim_type": "exclusive_write", 00:16:28.608 "zoned": false, 00:16:28.608 "supported_io_types": { 00:16:28.608 "read": true, 00:16:28.608 "write": true, 00:16:28.608 "unmap": true, 00:16:28.608 "flush": true, 00:16:28.608 "reset": true, 00:16:28.608 "nvme_admin": false, 00:16:28.608 "nvme_io": false, 00:16:28.608 "nvme_io_md": false, 00:16:28.608 "write_zeroes": true, 00:16:28.608 "zcopy": true, 00:16:28.608 "get_zone_info": false, 00:16:28.608 "zone_management": false, 00:16:28.608 "zone_append": false, 00:16:28.608 "compare": false, 00:16:28.608 "compare_and_write": false, 00:16:28.608 "abort": true, 00:16:28.608 "seek_hole": false, 00:16:28.608 "seek_data": false, 00:16:28.608 "copy": true, 00:16:28.608 "nvme_iov_md": false 00:16:28.608 }, 00:16:28.608 "memory_domains": [ 00:16:28.608 { 00:16:28.608 "dma_device_id": "system", 00:16:28.608 "dma_device_type": 1 00:16:28.608 }, 00:16:28.608 { 00:16:28.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.608 "dma_device_type": 2 00:16:28.608 } 00:16:28.608 ], 00:16:28.608 "driver_specific": { 00:16:28.608 "passthru": { 00:16:28.608 "name": "pt1", 00:16:28.608 "base_bdev_name": "malloc1" 00:16:28.608 } 00:16:28.608 } 00:16:28.608 }' 00:16:28.608 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.608 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.608 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.608 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.608 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:28.867 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.125 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.125 "name": "pt2", 00:16:29.125 "aliases": [ 00:16:29.125 "00000000-0000-0000-0000-000000000002" 00:16:29.125 ], 00:16:29.125 "product_name": "passthru", 00:16:29.125 "block_size": 
512, 00:16:29.125 "num_blocks": 65536, 00:16:29.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.125 "assigned_rate_limits": { 00:16:29.125 "rw_ios_per_sec": 0, 00:16:29.126 "rw_mbytes_per_sec": 0, 00:16:29.126 "r_mbytes_per_sec": 0, 00:16:29.126 "w_mbytes_per_sec": 0 00:16:29.126 }, 00:16:29.126 "claimed": true, 00:16:29.126 "claim_type": "exclusive_write", 00:16:29.126 "zoned": false, 00:16:29.126 "supported_io_types": { 00:16:29.126 "read": true, 00:16:29.126 "write": true, 00:16:29.126 "unmap": true, 00:16:29.126 "flush": true, 00:16:29.126 "reset": true, 00:16:29.126 "nvme_admin": false, 00:16:29.126 "nvme_io": false, 00:16:29.126 "nvme_io_md": false, 00:16:29.126 "write_zeroes": true, 00:16:29.126 "zcopy": true, 00:16:29.126 "get_zone_info": false, 00:16:29.126 "zone_management": false, 00:16:29.126 "zone_append": false, 00:16:29.126 "compare": false, 00:16:29.126 "compare_and_write": false, 00:16:29.126 "abort": true, 00:16:29.126 "seek_hole": false, 00:16:29.126 "seek_data": false, 00:16:29.126 "copy": true, 00:16:29.126 "nvme_iov_md": false 00:16:29.126 }, 00:16:29.126 "memory_domains": [ 00:16:29.126 { 00:16:29.126 "dma_device_id": "system", 00:16:29.126 "dma_device_type": 1 00:16:29.126 }, 00:16:29.126 { 00:16:29.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.126 "dma_device_type": 2 00:16:29.126 } 00:16:29.126 ], 00:16:29.126 "driver_specific": { 00:16:29.126 "passthru": { 00:16:29.126 "name": "pt2", 00:16:29.126 "base_bdev_name": "malloc2" 00:16:29.126 } 00:16:29.126 } 00:16:29.126 }' 00:16:29.126 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.126 10:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.126 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.126 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:29.385 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.644 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.644 "name": "pt3", 00:16:29.644 "aliases": [ 00:16:29.644 "00000000-0000-0000-0000-000000000003" 00:16:29.644 ], 00:16:29.644 "product_name": "passthru", 00:16:29.644 "block_size": 512, 00:16:29.644 "num_blocks": 65536, 00:16:29.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.644 
"assigned_rate_limits": { 00:16:29.644 "rw_ios_per_sec": 0, 00:16:29.644 "rw_mbytes_per_sec": 0, 00:16:29.644 "r_mbytes_per_sec": 0, 00:16:29.644 "w_mbytes_per_sec": 0 00:16:29.644 }, 00:16:29.644 "claimed": true, 00:16:29.644 "claim_type": "exclusive_write", 00:16:29.644 "zoned": false, 00:16:29.644 "supported_io_types": { 00:16:29.644 "read": true, 00:16:29.644 "write": true, 00:16:29.644 "unmap": true, 00:16:29.644 "flush": true, 00:16:29.644 "reset": true, 00:16:29.644 "nvme_admin": false, 00:16:29.644 "nvme_io": false, 00:16:29.644 "nvme_io_md": false, 00:16:29.644 "write_zeroes": true, 00:16:29.644 "zcopy": true, 00:16:29.644 "get_zone_info": false, 00:16:29.644 "zone_management": false, 00:16:29.644 "zone_append": false, 00:16:29.644 "compare": false, 00:16:29.644 "compare_and_write": false, 00:16:29.644 "abort": true, 00:16:29.644 "seek_hole": false, 00:16:29.644 "seek_data": false, 00:16:29.644 "copy": true, 00:16:29.644 "nvme_iov_md": false 00:16:29.644 }, 00:16:29.644 "memory_domains": [ 00:16:29.644 { 00:16:29.644 "dma_device_id": "system", 00:16:29.644 "dma_device_type": 1 00:16:29.644 }, 00:16:29.644 { 00:16:29.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.644 "dma_device_type": 2 00:16:29.644 } 00:16:29.644 ], 00:16:29.644 "driver_specific": { 00:16:29.644 "passthru": { 00:16:29.644 "name": "pt3", 00:16:29.644 "base_bdev_name": "malloc3" 00:16:29.644 } 00:16:29.644 } 00:16:29.644 }' 00:16:29.644 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.644 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.644 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.903 10:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:30.471 [2024-07-26 10:26:43.266934] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.471 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=af1896f7-9655-4121-8cd8-33d0ea55021a 00:16:30.471 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z af1896f7-9655-4121-8cd8-33d0ea55021a ']' 00:16:30.471 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.728 [2024-07-26 10:26:43.511315] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.728 [2024-07-26 10:26:43.511330] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.728 [2024-07-26 10:26:43.511372] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.728 [2024-07-26 10:26:43.511422] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.728 [2024-07-26 10:26:43.511433] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ffd00 name raid_bdev1, state offline 00:16:30.728 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.728 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:30.986 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:30.986 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:30.986 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.986 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:31.245 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.245 10:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:31.503 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.503 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:31.762 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:32.021 [2024-07-26 10:26:44.802670] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:32.021 [2024-07-26 10:26:44.803936] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:32.021 [2024-07-26 10:26:44.803974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:32.021 [2024-07-26 10:26:44.804016] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:32.021 [2024-07-26 10:26:44.804053] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:32.021 [2024-07-26 10:26:44.804073] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:32.021 [2024-07-26 10:26:44.804090] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.021 [2024-07-26 10:26:44.804099] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16008d0 name raid_bdev1, state configuring 00:16:32.021 request: 00:16:32.021 { 00:16:32.021 "name": "raid_bdev1", 00:16:32.021 "raid_level": "raid0", 00:16:32.021 "base_bdevs": [ 00:16:32.021 "malloc1", 00:16:32.021 "malloc2", 00:16:32.021 "malloc3" 00:16:32.021 ], 00:16:32.021 "strip_size_kb": 64, 00:16:32.021 "superblock": false, 00:16:32.022 "method": "bdev_raid_create", 00:16:32.022 "req_id": 1 00:16:32.022 } 00:16:32.022 Got JSON-RPC error response 00:16:32.022 response: 00:16:32.022 { 00:16:32.022 "code": -17, 00:16:32.022 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:32.022 } 00:16:32.022 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:32.022 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:32.022 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:32.022 10:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:32.022 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.022 10:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:32.280 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:32.280 10:26:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:32.280 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.539 [2024-07-26 10:26:45.247785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.539 [2024-07-26 10:26:45.247819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.539 [2024-07-26 10:26:45.247835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16785b0 00:16:32.539 [2024-07-26 10:26:45.247851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.539 [2024-07-26 10:26:45.249287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.539 [2024-07-26 10:26:45.249313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.539 [2024-07-26 10:26:45.249370] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.539 [2024-07-26 10:26:45.249392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.539 pt1 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.539 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.800 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.800 "name": "raid_bdev1", 00:16:32.800 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:32.800 "strip_size_kb": 64, 00:16:32.800 "state": "configuring", 00:16:32.800 "raid_level": "raid0", 00:16:32.800 "superblock": true, 00:16:32.800 "num_base_bdevs": 3, 00:16:32.800 "num_base_bdevs_discovered": 1, 00:16:32.800 "num_base_bdevs_operational": 3, 00:16:32.800 "base_bdevs_list": [ 00:16:32.800 { 00:16:32.800 "name": "pt1", 00:16:32.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.800 "is_configured": true, 00:16:32.800 "data_offset": 2048, 00:16:32.800 "data_size": 63488 00:16:32.800 }, 00:16:32.800 { 00:16:32.800 "name": null, 00:16:32.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.800 
"is_configured": false, 00:16:32.800 "data_offset": 2048, 00:16:32.800 "data_size": 63488 00:16:32.800 }, 00:16:32.800 { 00:16:32.800 "name": null, 00:16:32.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.800 "is_configured": false, 00:16:32.800 "data_offset": 2048, 00:16:32.800 "data_size": 63488 00:16:32.800 } 00:16:32.800 ] 00:16:32.800 }' 00:16:32.800 10:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.800 10:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.367 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:16:33.367 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.626 [2024-07-26 10:26:46.294555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.626 [2024-07-26 10:26:46.294599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.626 [2024-07-26 10:26:46.294618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1633a00 00:16:33.626 [2024-07-26 10:26:46.294629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.626 [2024-07-26 10:26:46.294932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.626 [2024-07-26 10:26:46.294947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.626 [2024-07-26 10:26:46.295001] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:33.626 [2024-07-26 10:26:46.295018] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.626 pt2 00:16:33.626 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:33.626 [2024-07-26 10:26:46.523174] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.890 "name": "raid_bdev1", 00:16:33.890 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:33.890 "strip_size_kb": 64, 00:16:33.890 "state": "configuring", 00:16:33.890 "raid_level": "raid0", 00:16:33.890 "superblock": true, 00:16:33.890 "num_base_bdevs": 3, 00:16:33.890 "num_base_bdevs_discovered": 1, 00:16:33.890 "num_base_bdevs_operational": 3, 00:16:33.890 "base_bdevs_list": [ 00:16:33.890 { 00:16:33.890 "name": "pt1", 00:16:33.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.890 "is_configured": true, 00:16:33.890 "data_offset": 2048, 00:16:33.890 "data_size": 63488 00:16:33.890 }, 00:16:33.890 { 00:16:33.890 "name": null, 00:16:33.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.890 "is_configured": false, 00:16:33.890 "data_offset": 2048, 00:16:33.890 "data_size": 63488 00:16:33.890 }, 00:16:33.890 { 00:16:33.890 "name": null, 00:16:33.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.890 "is_configured": false, 00:16:33.890 "data_offset": 2048, 00:16:33.890 "data_size": 63488 00:16:33.890 } 00:16:33.890 ] 00:16:33.890 }' 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.890 10:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.493 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:34.493 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:34.493 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.752 [2024-07-26 10:26:47.517784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.752 [2024-07-26 10:26:47.517829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.752 [2024-07-26 10:26:47.517846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16008d0 00:16:34.752 [2024-07-26 10:26:47.517857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.752 [2024-07-26 10:26:47.518165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.752 [2024-07-26 10:26:47.518181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.752 [2024-07-26 10:26:47.518236] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:34.752 [2024-07-26 10:26:47.518254] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.752 pt2 00:16:34.752 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:34.752 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:34.752 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.011 [2024-07-26 10:26:47.746392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.011 [2024-07-26 10:26:47.746425] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.011 [2024-07-26 10:26:47.746441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1603530 00:16:35.011 [2024-07-26 10:26:47.746453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.011 [2024-07-26 10:26:47.746700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.011 [2024-07-26 10:26:47.746716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.011 [2024-07-26 10:26:47.746760] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:35.011 [2024-07-26 10:26:47.746775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.011 [2024-07-26 10:26:47.746866] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1600fa0 00:16:35.011 [2024-07-26 10:26:47.746875] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:35.011 [2024-07-26 10:26:47.747018] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1633c90 00:16:35.011 [2024-07-26 10:26:47.747127] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1600fa0 00:16:35.011 [2024-07-26 10:26:47.747136] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1600fa0 00:16:35.011 [2024-07-26 10:26:47.747226] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.011 pt3 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.011 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.270 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.270 "name": "raid_bdev1", 00:16:35.270 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:35.270 "strip_size_kb": 64, 00:16:35.270 "state": "online", 00:16:35.270 "raid_level": "raid0", 00:16:35.270 "superblock": true, 00:16:35.270 
"num_base_bdevs": 3, 00:16:35.270 "num_base_bdevs_discovered": 3, 00:16:35.270 "num_base_bdevs_operational": 3, 00:16:35.270 "base_bdevs_list": [ 00:16:35.270 { 00:16:35.270 "name": "pt1", 00:16:35.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.270 "is_configured": true, 00:16:35.270 "data_offset": 2048, 00:16:35.270 "data_size": 63488 00:16:35.270 }, 00:16:35.270 { 00:16:35.270 "name": "pt2", 00:16:35.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.270 "is_configured": true, 00:16:35.270 "data_offset": 2048, 00:16:35.270 "data_size": 63488 00:16:35.270 }, 00:16:35.270 { 00:16:35.270 "name": "pt3", 00:16:35.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.270 "is_configured": true, 00:16:35.270 "data_offset": 2048, 00:16:35.270 "data_size": 63488 00:16:35.270 } 00:16:35.270 ] 00:16:35.270 }' 00:16:35.270 10:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.270 10:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:35.837 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:36.097 [2024-07-26 10:26:48.781395] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.097 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:36.097 "name": "raid_bdev1", 00:16:36.097 "aliases": [ 00:16:36.097 "af1896f7-9655-4121-8cd8-33d0ea55021a" 00:16:36.097 ], 00:16:36.097 "product_name": "Raid Volume", 00:16:36.097 "block_size": 512, 00:16:36.097 "num_blocks": 190464, 00:16:36.097 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:36.097 "assigned_rate_limits": { 00:16:36.097 "rw_ios_per_sec": 0, 00:16:36.097 "rw_mbytes_per_sec": 0, 00:16:36.097 "r_mbytes_per_sec": 0, 00:16:36.097 "w_mbytes_per_sec": 0 00:16:36.097 }, 00:16:36.097 "claimed": false, 00:16:36.097 "zoned": false, 00:16:36.097 "supported_io_types": { 00:16:36.097 "read": true, 00:16:36.097 "write": true, 00:16:36.097 "unmap": true, 00:16:36.097 "flush": true, 00:16:36.097 "reset": true, 00:16:36.097 "nvme_admin": false, 00:16:36.097 "nvme_io": false, 00:16:36.097 "nvme_io_md": false, 00:16:36.097 "write_zeroes": true, 00:16:36.097 "zcopy": false, 00:16:36.097 "get_zone_info": false, 00:16:36.097 "zone_management": false, 00:16:36.097 "zone_append": false, 00:16:36.097 "compare": false, 00:16:36.097 "compare_and_write": false, 00:16:36.097 "abort": false, 00:16:36.097 "seek_hole": false, 00:16:36.097 "seek_data": false, 00:16:36.097 "copy": false, 00:16:36.097 "nvme_iov_md": false 00:16:36.097 }, 00:16:36.097 "memory_domains": [ 00:16:36.097 { 00:16:36.097 "dma_device_id": "system", 00:16:36.097 
"dma_device_type": 1 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.097 "dma_device_type": 2 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "dma_device_id": "system", 00:16:36.097 "dma_device_type": 1 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.097 "dma_device_type": 2 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "dma_device_id": "system", 00:16:36.097 "dma_device_type": 1 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.097 "dma_device_type": 2 00:16:36.097 } 00:16:36.097 ], 00:16:36.097 "driver_specific": { 00:16:36.097 "raid": { 00:16:36.097 "uuid": "af1896f7-9655-4121-8cd8-33d0ea55021a", 00:16:36.097 "strip_size_kb": 64, 00:16:36.097 "state": "online", 00:16:36.097 "raid_level": "raid0", 00:16:36.097 "superblock": true, 00:16:36.097 "num_base_bdevs": 3, 00:16:36.097 "num_base_bdevs_discovered": 3, 00:16:36.097 "num_base_bdevs_operational": 3, 00:16:36.097 "base_bdevs_list": [ 00:16:36.097 { 00:16:36.097 "name": "pt1", 00:16:36.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.097 "is_configured": true, 00:16:36.097 "data_offset": 2048, 00:16:36.097 "data_size": 63488 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "name": "pt2", 00:16:36.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.097 "is_configured": true, 00:16:36.097 "data_offset": 2048, 00:16:36.097 "data_size": 63488 00:16:36.097 }, 00:16:36.097 { 00:16:36.097 "name": "pt3", 00:16:36.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.097 "is_configured": true, 00:16:36.097 "data_offset": 2048, 00:16:36.097 "data_size": 63488 00:16:36.097 } 00:16:36.097 ] 00:16:36.097 } 00:16:36.097 } 00:16:36.097 }' 00:16:36.097 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.097 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:36.097 pt2 00:16:36.097 pt3' 00:16:36.097 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.097 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:36.097 10:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.356 "name": "pt1", 00:16:36.356 "aliases": [ 00:16:36.356 "00000000-0000-0000-0000-000000000001" 00:16:36.356 ], 00:16:36.356 "product_name": "passthru", 00:16:36.356 "block_size": 512, 00:16:36.356 "num_blocks": 65536, 00:16:36.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.356 "assigned_rate_limits": { 00:16:36.356 "rw_ios_per_sec": 0, 00:16:36.356 "rw_mbytes_per_sec": 0, 00:16:36.356 "r_mbytes_per_sec": 0, 00:16:36.356 "w_mbytes_per_sec": 0 00:16:36.356 }, 00:16:36.356 "claimed": true, 00:16:36.356 "claim_type": "exclusive_write", 00:16:36.356 "zoned": false, 00:16:36.356 "supported_io_types": { 00:16:36.356 "read": true, 00:16:36.356 "write": true, 00:16:36.356 "unmap": true, 00:16:36.356 "flush": true, 00:16:36.356 "reset": true, 00:16:36.356 "nvme_admin": false, 00:16:36.356 "nvme_io": false, 00:16:36.356 "nvme_io_md": false, 00:16:36.356 "write_zeroes": true, 00:16:36.356 "zcopy": true, 00:16:36.356 "get_zone_info": false, 00:16:36.356 
"zone_management": false, 00:16:36.356 "zone_append": false, 00:16:36.356 "compare": false, 00:16:36.356 "compare_and_write": false, 00:16:36.356 "abort": true, 00:16:36.356 "seek_hole": false, 00:16:36.356 "seek_data": false, 00:16:36.356 "copy": true, 00:16:36.356 "nvme_iov_md": false 00:16:36.356 }, 00:16:36.356 "memory_domains": [ 00:16:36.356 { 00:16:36.356 "dma_device_id": "system", 00:16:36.356 "dma_device_type": 1 00:16:36.356 }, 00:16:36.356 { 00:16:36.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.356 "dma_device_type": 2 00:16:36.356 } 00:16:36.356 ], 00:16:36.356 "driver_specific": { 00:16:36.356 "passthru": { 00:16:36.356 "name": "pt1", 00:16:36.356 "base_bdev_name": "malloc1" 00:16:36.356 } 00:16:36.356 } 00:16:36.356 }' 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.356 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.615 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:36.874 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.874 "name": "pt2", 00:16:36.874 "aliases": [ 00:16:36.874 "00000000-0000-0000-0000-000000000002" 00:16:36.874 ], 00:16:36.874 "product_name": "passthru", 00:16:36.874 "block_size": 512, 00:16:36.874 "num_blocks": 65536, 00:16:36.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.874 "assigned_rate_limits": { 00:16:36.874 "rw_ios_per_sec": 0, 00:16:36.874 "rw_mbytes_per_sec": 0, 00:16:36.874 "r_mbytes_per_sec": 0, 00:16:36.874 "w_mbytes_per_sec": 0 00:16:36.875 }, 00:16:36.875 "claimed": true, 00:16:36.875 "claim_type": "exclusive_write", 00:16:36.875 "zoned": false, 00:16:36.875 "supported_io_types": { 00:16:36.875 "read": true, 00:16:36.875 "write": true, 00:16:36.875 "unmap": true, 00:16:36.875 "flush": true, 00:16:36.875 "reset": true, 00:16:36.875 "nvme_admin": false, 00:16:36.875 "nvme_io": false, 00:16:36.875 "nvme_io_md": false, 00:16:36.875 "write_zeroes": true, 00:16:36.875 "zcopy": true, 00:16:36.875 "get_zone_info": false, 00:16:36.875 "zone_management": false, 00:16:36.875 "zone_append": false, 00:16:36.875 "compare": false, 00:16:36.875 "compare_and_write": false, 
00:16:36.875 "abort": true, 00:16:36.875 "seek_hole": false, 00:16:36.875 "seek_data": false, 00:16:36.875 "copy": true, 00:16:36.875 "nvme_iov_md": false 00:16:36.875 }, 00:16:36.875 "memory_domains": [ 00:16:36.875 { 00:16:36.875 "dma_device_id": "system", 00:16:36.875 "dma_device_type": 1 00:16:36.875 }, 00:16:36.875 { 00:16:36.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.875 "dma_device_type": 2 00:16:36.875 } 00:16:36.875 ], 00:16:36.875 "driver_specific": { 00:16:36.875 "passthru": { 00:16:36.875 "name": "pt2", 00:16:36.875 "base_bdev_name": "malloc2" 00:16:36.875 } 00:16:36.875 } 00:16:36.875 }' 00:16:36.875 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.875 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.875 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.875 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.134 10:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:37.393 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.393 "name": "pt3", 00:16:37.393 "aliases": [ 00:16:37.393 "00000000-0000-0000-0000-000000000003" 00:16:37.393 ], 00:16:37.393 "product_name": "passthru", 00:16:37.393 "block_size": 512, 00:16:37.393 "num_blocks": 65536, 00:16:37.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.393 "assigned_rate_limits": { 00:16:37.393 "rw_ios_per_sec": 0, 00:16:37.393 "rw_mbytes_per_sec": 0, 00:16:37.393 "r_mbytes_per_sec": 0, 00:16:37.393 "w_mbytes_per_sec": 0 00:16:37.393 }, 00:16:37.393 "claimed": true, 00:16:37.393 "claim_type": "exclusive_write", 00:16:37.393 "zoned": false, 00:16:37.393 "supported_io_types": { 00:16:37.393 "read": true, 00:16:37.393 "write": true, 00:16:37.393 "unmap": true, 00:16:37.393 "flush": true, 00:16:37.393 "reset": true, 00:16:37.393 "nvme_admin": false, 00:16:37.393 "nvme_io": false, 00:16:37.393 "nvme_io_md": false, 00:16:37.393 "write_zeroes": true, 00:16:37.393 "zcopy": true, 00:16:37.393 "get_zone_info": false, 00:16:37.393 "zone_management": false, 00:16:37.393 "zone_append": false, 00:16:37.393 "compare": false, 00:16:37.393 "compare_and_write": false, 00:16:37.393 "abort": true, 00:16:37.393 "seek_hole": false, 00:16:37.393 "seek_data": false, 00:16:37.393 "copy": true, 00:16:37.393 
"nvme_iov_md": false 00:16:37.393 }, 00:16:37.393 "memory_domains": [ 00:16:37.393 { 00:16:37.393 "dma_device_id": "system", 00:16:37.393 "dma_device_type": 1 00:16:37.393 }, 00:16:37.393 { 00:16:37.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.393 "dma_device_type": 2 00:16:37.393 } 00:16:37.393 ], 00:16:37.393 "driver_specific": { 00:16:37.393 "passthru": { 00:16:37.393 "name": "pt3", 00:16:37.393 "base_bdev_name": "malloc3" 00:16:37.393 } 00:16:37.393 } 00:16:37.393 }' 00:16:37.393 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.393 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.652 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:37.912 [2024-07-26 10:26:50.766610] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' af1896f7-9655-4121-8cd8-33d0ea55021a '!=' af1896f7-9655-4121-8cd8-33d0ea55021a ']' 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3380292 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3380292 ']' 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3380292 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.912 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3380292 00:16:38.172 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:38.172 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:38.172 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 3380292' 00:16:38.172 killing process with pid 3380292 00:16:38.172 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3380292 00:16:38.172 [2024-07-26 10:26:50.843421] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.172 [2024-07-26 10:26:50.843473] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.172 [2024-07-26 10:26:50.843524] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.172 [2024-07-26 10:26:50.843535] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1600fa0 name raid_bdev1, state offline 00:16:38.172 10:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3380292 00:16:38.172 [2024-07-26 10:26:50.867148] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.172 10:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:38.172 00:16:38.172 real 0m13.516s 00:16:38.172 user 0m24.375s 00:16:38.172 sys 0m2.381s 00:16:38.172 10:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.172 10:26:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.172 ************************************ 00:16:38.172 END TEST raid_superblock_test 00:16:38.172 ************************************ 00:16:38.432 10:26:51 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:16:38.432 10:26:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.432 10:26:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.432 10:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.432 ************************************ 00:16:38.432 START TEST raid_read_error_test 00:16:38.432 ************************************ 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.NCQ57qwKfd 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3382950 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3382950 /var/tmp/spdk-raid.sock 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3382950 ']' 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:38.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.432 10:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.432 [2024-07-26 10:26:51.187303] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:16:38.432 [2024-07-26 10:26:51.187358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382950 ] 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:38.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:38.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:38.432 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:38.432 [2024-07-26 10:26:51.319143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.692 [2024-07-26 10:26:51.364454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.692 [2024-07-26 10:26:51.419666] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.692 [2024-07-26 10:26:51.419697] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.260 10:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.260 10:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:39.260 10:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:39.260 10:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:39.519 BaseBdev1_malloc 00:16:39.519 10:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:39.778 true 00:16:39.778 10:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:40.038 [2024-07-26 10:26:52.742540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:40.038 [2024-07-26 10:26:52.742580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.038 [2024-07-26 10:26:52.742605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fdf7c0 00:16:40.038 [2024-07-26 10:26:52.742617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.038 [2024-07-26 10:26:52.744148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.038 [2024-07-26 10:26:52.744175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.038 BaseBdev1 00:16:40.038 10:26:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:40.038 10:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:40.297 BaseBdev2_malloc 00:16:40.297 10:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:40.556 true 00:16:40.556 10:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:40.557 [2024-07-26 10:26:53.428724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:40.557 [2024-07-26 10:26:53.428764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.557 [2024-07-26 10:26:53.428786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f86960 00:16:40.557 [2024-07-26 10:26:53.428797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.557 [2024-07-26 10:26:53.430148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.557 [2024-07-26 10:26:53.430174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:40.557 BaseBdev2 00:16:40.557 10:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:40.557 10:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:40.815 BaseBdev3_malloc 00:16:40.815 10:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:41.074 true 00:16:41.074 10:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:41.332 [2024-07-26 10:26:54.114837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:41.332 [2024-07-26 10:26:54.114875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.332 [2024-07-26 10:26:54.114894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f89720 00:16:41.332 [2024-07-26 10:26:54.114906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.332 [2024-07-26 10:26:54.116268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.332 [2024-07-26 10:26:54.116294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:41.332 BaseBdev3 00:16:41.332 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:41.591 [2024-07-26 10:26:54.339472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.591 [2024-07-26 10:26:54.340649] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.591 [2024-07-26 10:26:54.340709] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.591 [2024-07-26 10:26:54.340879] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f885b0 00:16:41.591 [2024-07-26 10:26:54.340889] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:41.591 [2024-07-26 10:26:54.341065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8dd20 00:16:41.591 [2024-07-26 10:26:54.341203] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f885b0 00:16:41.591 [2024-07-26 10:26:54.341212] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f885b0 00:16:41.591 [2024-07-26 10:26:54.341323] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.591 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.592 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.592 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.851 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.851 "name": "raid_bdev1", 00:16:41.851 "uuid": "ee42d3be-4350-44ac-ba78-f7286989ab5f", 00:16:41.851 "strip_size_kb": 64, 00:16:41.851 "state": "online", 00:16:41.851 "raid_level": "raid0", 00:16:41.851 "superblock": true, 00:16:41.851 "num_base_bdevs": 3, 00:16:41.851 "num_base_bdevs_discovered": 3, 00:16:41.851 "num_base_bdevs_operational": 3, 00:16:41.851 "base_bdevs_list": [ 00:16:41.851 { 00:16:41.851 "name": "BaseBdev1", 00:16:41.851 "uuid": "0d03cb6f-6aed-5687-bf80-830576673cbd", 00:16:41.851 "is_configured": true, 00:16:41.851 "data_offset": 2048, 00:16:41.851 "data_size": 63488 00:16:41.851 }, 00:16:41.851 { 00:16:41.851 "name": "BaseBdev2", 00:16:41.851 "uuid": "66d3881a-572c-53c7-89d2-c2c406723732", 00:16:41.851 "is_configured": true, 00:16:41.851 "data_offset": 2048, 00:16:41.851 "data_size": 63488 00:16:41.851 }, 00:16:41.851 { 00:16:41.851 "name": "BaseBdev3", 00:16:41.851 "uuid": "596e6463-fa80-5313-b0d9-120e6146e0c4", 00:16:41.851 "is_configured": true, 00:16:41.851 "data_offset": 2048, 00:16:41.851 "data_size": 63488 
00:16:41.851 } 00:16:41.851 ] 00:16:41.851 }' 00:16:41.851 10:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.851 10:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 10:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:42.419 10:26:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:42.419 [2024-07-26 10:26:55.201957] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f8ad20 00:16:43.355 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.614 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.873 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.873 "name": "raid_bdev1", 00:16:43.873 "uuid": "ee42d3be-4350-44ac-ba78-f7286989ab5f", 00:16:43.873 "strip_size_kb": 64, 00:16:43.873 "state": "online", 00:16:43.873 "raid_level": "raid0", 00:16:43.873 "superblock": true, 00:16:43.873 "num_base_bdevs": 3, 00:16:43.873 "num_base_bdevs_discovered": 3, 00:16:43.873 "num_base_bdevs_operational": 3, 00:16:43.873 "base_bdevs_list": [ 00:16:43.873 { 00:16:43.873 "name": "BaseBdev1", 00:16:43.873 "uuid": "0d03cb6f-6aed-5687-bf80-830576673cbd", 00:16:43.873 "is_configured": true, 00:16:43.873 "data_offset": 2048, 00:16:43.873 "data_size": 63488 00:16:43.873 }, 00:16:43.873 { 00:16:43.873 "name": "BaseBdev2", 00:16:43.873 "uuid": "66d3881a-572c-53c7-89d2-c2c406723732", 00:16:43.873 "is_configured": true, 00:16:43.873 "data_offset": 2048, 
00:16:43.873 "data_size": 63488 00:16:43.873 }, 00:16:43.873 { 00:16:43.873 "name": "BaseBdev3", 00:16:43.873 "uuid": "596e6463-fa80-5313-b0d9-120e6146e0c4", 00:16:43.873 "is_configured": true, 00:16:43.873 "data_offset": 2048, 00:16:43.873 "data_size": 63488 00:16:43.873 } 00:16:43.873 ] 00:16:43.873 }' 00:16:43.873 10:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.873 10:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.442 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:44.442 [2024-07-26 10:26:57.316915] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.442 [2024-07-26 10:26:57.316948] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.442 [2024-07-26 10:26:57.319867] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.442 [2024-07-26 10:26:57.319901] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.442 [2024-07-26 10:26:57.319932] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.442 [2024-07-26 10:26:57.319942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f885b0 name raid_bdev1, state offline 00:16:44.442 0 00:16:44.442 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3382950 00:16:44.442 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3382950 ']' 00:16:44.442 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3382950 00:16:44.442 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3382950 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3382950' 00:16:44.701 killing process with pid 3382950 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3382950 00:16:44.701 [2024-07-26 10:26:57.397056] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3382950 00:16:44.701 [2024-07-26 10:26:57.415597] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.NCQ57qwKfd 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:44.701 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:44.960 00:16:44.960 real 0m6.492s 00:16:44.960 user 0m10.186s 00:16:44.960 sys 0m1.157s 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.960 10:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.960 ************************************ 00:16:44.960 END TEST raid_read_error_test 00:16:44.960 ************************************ 00:16:44.960 10:26:57 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:16:44.960 10:26:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:44.960 10:26:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.960 10:26:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.960 ************************************ 00:16:44.960 START TEST raid_write_error_test 00:16:44.961 ************************************ 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:44.961 10:26:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ZyQYxv4P03 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3384113 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3384113 /var/tmp/spdk-raid.sock 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3384113 ']' 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:44.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.961 10:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.961 [2024-07-26 10:26:57.773061] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:16:44.961 [2024-07-26 10:26:57.773119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384113 ] 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:44.961 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:44.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.961 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:45.220 [2024-07-26 10:26:57.908973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.220 [2024-07-26 10:26:57.952646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.220 [2024-07-26 10:26:58.007035] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.220 [2024-07-26 10:26:58.007068] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.788 10:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.788 10:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:45.788 10:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:45.788 10:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:46.048 BaseBdev1_malloc 00:16:46.048 10:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:46.306 true 00:16:46.306 10:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:46.565 [2024-07-26 10:26:59.333215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:46.565 [2024-07-26 10:26:59.333259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.565 [2024-07-26 10:26:59.333277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd7b7c0 00:16:46.565 [2024-07-26 10:26:59.333289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.565 [2024-07-26 10:26:59.334833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.565 [2024-07-26 10:26:59.334860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.565 BaseBdev1 00:16:46.565 10:26:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:46.565 10:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:46.823 BaseBdev2_malloc 00:16:46.823 10:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:47.109 true 00:16:47.109 10:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:47.368 [2024-07-26 10:27:00.027408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:47.368 [2024-07-26 10:27:00.027451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.368 [2024-07-26 10:27:00.027475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd22960 00:16:47.368 [2024-07-26 10:27:00.027488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.368 [2024-07-26 10:27:00.028809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.368 [2024-07-26 10:27:00.028838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.368 BaseBdev2 00:16:47.368 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:47.368 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:47.368 BaseBdev3_malloc 00:16:47.628 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:47.628 true 00:16:47.628 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:47.886 [2024-07-26 10:27:00.725388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:47.886 [2024-07-26 10:27:00.725431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.886 [2024-07-26 10:27:00.725448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd25720 00:16:47.886 [2024-07-26 10:27:00.725460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.886 [2024-07-26 10:27:00.726769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.886 [2024-07-26 10:27:00.726796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:47.886 BaseBdev3 00:16:47.886 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:48.145 [2024-07-26 10:27:00.950014] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.145 [2024-07-26 10:27:00.951176] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.145 [2024-07-26 10:27:00.951238] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.145 [2024-07-26 10:27:00.951409] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd245b0 00:16:48.145 [2024-07-26 10:27:00.951420] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:48.145 [2024-07-26 10:27:00.951597] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd29d20 00:16:48.145 [2024-07-26 10:27:00.951723] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd245b0 00:16:48.145 [2024-07-26 10:27:00.951732] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd245b0 00:16:48.145 [2024-07-26 10:27:00.951838] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.145 10:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.404 10:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.404 "name": "raid_bdev1", 00:16:48.404 "uuid": "eb96422b-bee2-45a5-b4c9-7805a74d2e1e", 00:16:48.404 "strip_size_kb": 64, 00:16:48.404 "state": "online", 00:16:48.404 "raid_level": "raid0", 00:16:48.404 "superblock": true, 00:16:48.404 "num_base_bdevs": 3, 00:16:48.404 "num_base_bdevs_discovered": 3, 00:16:48.404 "num_base_bdevs_operational": 3, 00:16:48.404 "base_bdevs_list": [ 00:16:48.404 { 00:16:48.404 "name": "BaseBdev1", 00:16:48.404 "uuid": "19d089e9-27b9-56fb-a65e-775e02d2273d", 00:16:48.404 "is_configured": true, 00:16:48.404 "data_offset": 2048, 00:16:48.404 "data_size": 63488 00:16:48.404 }, 00:16:48.404 { 00:16:48.404 "name": "BaseBdev2", 00:16:48.404 "uuid": "539477ac-8708-57a7-8c2c-5a8f8c0d8cfc", 00:16:48.404 "is_configured": true, 00:16:48.404 "data_offset": 2048, 00:16:48.404 "data_size": 63488 00:16:48.404 }, 00:16:48.404 { 00:16:48.404 "name": "BaseBdev3", 00:16:48.404 "uuid": "64d36595-57d7-51bf-a23f-b800c5abdc0c", 00:16:48.404 "is_configured": true, 00:16:48.404 "data_offset": 2048, 00:16:48.404 "data_size": 
63488 00:16:48.404 } 00:16:48.404 ] 00:16:48.404 }' 00:16:48.404 10:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.404 10:27:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.972 10:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:48.972 10:27:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:49.232 [2024-07-26 10:27:01.876682] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd26d20 00:16:50.170 10:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.170 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.429 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.429 "name": "raid_bdev1", 00:16:50.429 "uuid": "eb96422b-bee2-45a5-b4c9-7805a74d2e1e", 00:16:50.429 "strip_size_kb": 64, 00:16:50.429 "state": "online", 00:16:50.429 "raid_level": "raid0", 00:16:50.430 "superblock": true, 00:16:50.430 "num_base_bdevs": 3, 00:16:50.430 "num_base_bdevs_discovered": 3, 00:16:50.430 "num_base_bdevs_operational": 3, 00:16:50.430 "base_bdevs_list": [ 00:16:50.430 { 00:16:50.430 "name": "BaseBdev1", 00:16:50.430 "uuid": "19d089e9-27b9-56fb-a65e-775e02d2273d", 00:16:50.430 "is_configured": true, 00:16:50.430 "data_offset": 2048, 00:16:50.430 "data_size": 63488 00:16:50.430 }, 00:16:50.430 { 00:16:50.430 "name": "BaseBdev2", 00:16:50.430 "uuid": "539477ac-8708-57a7-8c2c-5a8f8c0d8cfc", 00:16:50.430 "is_configured": true, 00:16:50.430 
"data_offset": 2048, 00:16:50.430 "data_size": 63488 00:16:50.430 }, 00:16:50.430 { 00:16:50.430 "name": "BaseBdev3", 00:16:50.430 "uuid": "64d36595-57d7-51bf-a23f-b800c5abdc0c", 00:16:50.430 "is_configured": true, 00:16:50.430 "data_offset": 2048, 00:16:50.430 "data_size": 63488 00:16:50.430 } 00:16:50.430 ] 00:16:50.430 }' 00:16:50.430 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.430 10:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.997 10:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:51.256 [2024-07-26 10:27:04.031839] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.256 [2024-07-26 10:27:04.031871] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.256 [2024-07-26 10:27:04.034800] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.256 [2024-07-26 10:27:04.034834] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.256 [2024-07-26 10:27:04.034864] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.256 [2024-07-26 10:27:04.034874] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd245b0 name raid_bdev1, state offline 00:16:51.256 0 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3384113 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3384113 ']' 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3384113 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3384113 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3384113' 00:16:51.256 killing process with pid 3384113 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3384113 00:16:51.256 [2024-07-26 10:27:04.104022] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.256 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3384113 00:16:51.256 [2024-07-26 10:27:04.122710] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ZyQYxv4P03 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:16:51.515 
10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:51.515 00:16:51.515 real 0m6.617s 00:16:51.515 user 0m10.386s 00:16:51.515 sys 0m1.213s 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.515 10:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.515 ************************************ 00:16:51.515 END TEST raid_write_error_test 00:16:51.515 ************************************ 00:16:51.515 10:27:04 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:51.515 10:27:04 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:51.515 10:27:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:51.515 10:27:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.515 10:27:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.515 ************************************ 00:16:51.515 START TEST raid_state_function_test 00:16:51.515 ************************************ 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:51.515 
10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3385711 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3385711' 00:16:51.515 Process raid pid: 3385711 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3385711 /var/tmp/spdk-raid.sock 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3385711 ']' 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:51.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.515 10:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.774 [2024-07-26 10:27:04.463544] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
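The state-function test that begins here drives the bdev_svc app above purely through rpc.py over /var/tmp/spdk-raid.sock. A condensed sketch of the sequence the following trace walks through, using the commands and the jq filter exactly as they appear in it; SPDK and RPC are illustrative shell variables, not part of the original script.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Creating the raid before any base bdev exists is allowed; it simply stays in the
# "configuring" state with num_base_bdevs_discovered at 0.
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Base bdevs are then added one at a time; the raid only flips to "online" once all
# three have been claimed.
$RPC bdev_malloc_create 32 512 -b BaseBdev1

# verify_raid_bdev_state reads the state back and narrows it to the raid under test:
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'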
00:16:51.774 [2024-07-26 10:27:04.463603] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:51.774 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:51.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:51.774 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:51.774 [2024-07-26 10:27:04.595729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.774 [2024-07-26 10:27:04.640395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.032 [2024-07-26 10:27:04.695941] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.032 [2024-07-26 10:27:04.695965] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.600 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.600 10:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:52.600 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:52.859 [2024-07-26 10:27:05.547779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.859 [2024-07-26 10:27:05.547816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.859 [2024-07-26 10:27:05.547826] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.859 [2024-07-26 10:27:05.547837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.859 [2024-07-26 10:27:05.547845] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.859 [2024-07-26 10:27:05.547855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.859 10:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.426 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.426 "name": "Existed_Raid", 00:16:53.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.426 "strip_size_kb": 64, 00:16:53.426 "state": "configuring", 00:16:53.426 "raid_level": "concat", 00:16:53.426 "superblock": false, 00:16:53.426 "num_base_bdevs": 3, 00:16:53.426 "num_base_bdevs_discovered": 0, 00:16:53.426 "num_base_bdevs_operational": 3, 00:16:53.426 "base_bdevs_list": [ 00:16:53.426 { 00:16:53.426 "name": "BaseBdev1", 00:16:53.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.426 "is_configured": false, 00:16:53.426 "data_offset": 0, 00:16:53.426 "data_size": 0 00:16:53.426 }, 00:16:53.426 { 00:16:53.426 "name": "BaseBdev2", 00:16:53.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.426 "is_configured": false, 00:16:53.426 "data_offset": 0, 00:16:53.426 "data_size": 0 00:16:53.426 }, 00:16:53.426 { 00:16:53.426 "name": "BaseBdev3", 00:16:53.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.426 "is_configured": false, 00:16:53.426 "data_offset": 0, 00:16:53.426 "data_size": 0 00:16:53.426 } 00:16:53.426 ] 00:16:53.426 }' 00:16:53.426 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.426 10:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.994 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.994 [2024-07-26 10:27:06.831026] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.994 [2024-07-26 10:27:06.831059] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x191db70 name Existed_Raid, state configuring 00:16:53.994 10:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.252 [2024-07-26 10:27:07.059648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.252 [2024-07-26 10:27:07.059673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.252 [2024-07-26 10:27:07.059682] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.252 [2024-07-26 10:27:07.059693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:16:54.252 [2024-07-26 10:27:07.059701] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.252 [2024-07-26 10:27:07.059711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.252 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.511 [2024-07-26 10:27:07.289581] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.511 BaseBdev1 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.511 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.770 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.029 [ 00:16:55.029 { 00:16:55.029 "name": "BaseBdev1", 00:16:55.029 "aliases": [ 00:16:55.029 "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349" 00:16:55.029 ], 00:16:55.029 "product_name": "Malloc disk", 00:16:55.029 "block_size": 512, 00:16:55.029 "num_blocks": 65536, 00:16:55.029 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:16:55.029 "assigned_rate_limits": { 00:16:55.029 "rw_ios_per_sec": 0, 00:16:55.029 "rw_mbytes_per_sec": 0, 00:16:55.029 "r_mbytes_per_sec": 0, 00:16:55.029 "w_mbytes_per_sec": 0 00:16:55.029 }, 00:16:55.029 "claimed": true, 00:16:55.029 "claim_type": "exclusive_write", 00:16:55.029 "zoned": false, 00:16:55.029 "supported_io_types": { 00:16:55.029 "read": true, 00:16:55.029 "write": true, 00:16:55.029 "unmap": true, 00:16:55.029 "flush": true, 00:16:55.029 "reset": true, 00:16:55.029 "nvme_admin": false, 00:16:55.029 "nvme_io": false, 00:16:55.029 "nvme_io_md": false, 00:16:55.029 "write_zeroes": true, 00:16:55.029 "zcopy": true, 00:16:55.029 "get_zone_info": false, 00:16:55.029 "zone_management": false, 00:16:55.029 "zone_append": false, 00:16:55.029 "compare": false, 00:16:55.029 "compare_and_write": false, 00:16:55.029 "abort": true, 00:16:55.029 "seek_hole": false, 00:16:55.029 "seek_data": false, 00:16:55.029 "copy": true, 00:16:55.029 "nvme_iov_md": false 00:16:55.029 }, 00:16:55.029 "memory_domains": [ 00:16:55.029 { 00:16:55.029 "dma_device_id": "system", 00:16:55.029 "dma_device_type": 1 00:16:55.029 }, 00:16:55.029 { 00:16:55.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.029 "dma_device_type": 2 00:16:55.029 } 00:16:55.029 ], 00:16:55.029 "driver_specific": {} 00:16:55.029 } 00:16:55.029 ] 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:55.029 10:27:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.029 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.030 "name": "Existed_Raid", 00:16:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.030 "strip_size_kb": 64, 00:16:55.030 "state": "configuring", 00:16:55.030 "raid_level": "concat", 00:16:55.030 "superblock": false, 00:16:55.030 "num_base_bdevs": 3, 00:16:55.030 "num_base_bdevs_discovered": 1, 00:16:55.030 "num_base_bdevs_operational": 3, 00:16:55.030 "base_bdevs_list": [ 00:16:55.030 { 00:16:55.030 "name": "BaseBdev1", 00:16:55.030 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:16:55.030 "is_configured": true, 00:16:55.030 "data_offset": 0, 00:16:55.030 "data_size": 65536 00:16:55.030 }, 00:16:55.030 { 00:16:55.030 "name": "BaseBdev2", 00:16:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.030 "is_configured": false, 00:16:55.030 "data_offset": 0, 00:16:55.030 "data_size": 0 00:16:55.030 }, 00:16:55.030 { 00:16:55.030 "name": "BaseBdev3", 00:16:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.030 "is_configured": false, 00:16:55.030 "data_offset": 0, 00:16:55.030 "data_size": 0 00:16:55.030 } 00:16:55.030 ] 00:16:55.030 }' 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.030 10:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.599 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.858 [2024-07-26 10:27:08.669220] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.858 [2024-07-26 10:27:08.669256] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x191d4a0 name Existed_Raid, state configuring 00:16:55.858 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 
-r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.118 [2024-07-26 10:27:08.897854] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.118 [2024-07-26 10:27:08.899202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.118 [2024-07-26 10:27:08.899233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.118 [2024-07-26 10:27:08.899242] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.118 [2024-07-26 10:27:08.899253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.118 10:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.377 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.377 "name": "Existed_Raid", 00:16:56.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.377 "strip_size_kb": 64, 00:16:56.377 "state": "configuring", 00:16:56.377 "raid_level": "concat", 00:16:56.377 "superblock": false, 00:16:56.377 "num_base_bdevs": 3, 00:16:56.377 "num_base_bdevs_discovered": 1, 00:16:56.377 "num_base_bdevs_operational": 3, 00:16:56.377 "base_bdevs_list": [ 00:16:56.377 { 00:16:56.377 "name": "BaseBdev1", 00:16:56.377 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:16:56.377 "is_configured": true, 00:16:56.377 "data_offset": 0, 00:16:56.377 "data_size": 65536 00:16:56.377 }, 00:16:56.377 { 00:16:56.377 "name": "BaseBdev2", 00:16:56.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.377 "is_configured": false, 00:16:56.377 "data_offset": 0, 00:16:56.377 "data_size": 0 00:16:56.377 }, 00:16:56.377 { 00:16:56.377 "name": "BaseBdev3", 00:16:56.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.377 "is_configured": false, 00:16:56.377 "data_offset": 0, 
00:16:56.377 "data_size": 0 00:16:56.377 } 00:16:56.377 ] 00:16:56.377 }' 00:16:56.377 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.377 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.946 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.206 [2024-07-26 10:27:09.927990] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.206 BaseBdev2 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.206 10:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.466 10:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.725 [ 00:16:57.725 { 00:16:57.725 "name": "BaseBdev2", 00:16:57.725 "aliases": [ 00:16:57.725 "4baf6e61-a26c-4b8a-8911-da4793180421" 00:16:57.725 ], 00:16:57.725 "product_name": "Malloc disk", 00:16:57.725 "block_size": 512, 00:16:57.725 "num_blocks": 65536, 00:16:57.725 "uuid": "4baf6e61-a26c-4b8a-8911-da4793180421", 00:16:57.725 "assigned_rate_limits": { 00:16:57.725 "rw_ios_per_sec": 0, 00:16:57.725 "rw_mbytes_per_sec": 0, 00:16:57.725 "r_mbytes_per_sec": 0, 00:16:57.725 "w_mbytes_per_sec": 0 00:16:57.725 }, 00:16:57.725 "claimed": true, 00:16:57.725 "claim_type": "exclusive_write", 00:16:57.725 "zoned": false, 00:16:57.725 "supported_io_types": { 00:16:57.725 "read": true, 00:16:57.725 "write": true, 00:16:57.725 "unmap": true, 00:16:57.725 "flush": true, 00:16:57.725 "reset": true, 00:16:57.725 "nvme_admin": false, 00:16:57.725 "nvme_io": false, 00:16:57.725 "nvme_io_md": false, 00:16:57.725 "write_zeroes": true, 00:16:57.725 "zcopy": true, 00:16:57.725 "get_zone_info": false, 00:16:57.725 "zone_management": false, 00:16:57.725 "zone_append": false, 00:16:57.725 "compare": false, 00:16:57.725 "compare_and_write": false, 00:16:57.725 "abort": true, 00:16:57.725 "seek_hole": false, 00:16:57.725 "seek_data": false, 00:16:57.725 "copy": true, 00:16:57.725 "nvme_iov_md": false 00:16:57.725 }, 00:16:57.725 "memory_domains": [ 00:16:57.725 { 00:16:57.725 "dma_device_id": "system", 00:16:57.725 "dma_device_type": 1 00:16:57.725 }, 00:16:57.725 { 00:16:57.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.725 "dma_device_type": 2 00:16:57.725 } 00:16:57.725 ], 00:16:57.725 "driver_specific": {} 00:16:57.725 } 00:16:57.725 ] 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:57.725 10:27:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.725 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.984 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.984 "name": "Existed_Raid", 00:16:57.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.984 "strip_size_kb": 64, 00:16:57.984 "state": "configuring", 00:16:57.984 "raid_level": "concat", 00:16:57.984 "superblock": false, 00:16:57.984 "num_base_bdevs": 3, 00:16:57.984 "num_base_bdevs_discovered": 2, 00:16:57.984 "num_base_bdevs_operational": 3, 00:16:57.984 "base_bdevs_list": [ 00:16:57.984 { 00:16:57.984 "name": "BaseBdev1", 00:16:57.984 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:16:57.984 "is_configured": true, 00:16:57.984 "data_offset": 0, 00:16:57.984 "data_size": 65536 00:16:57.984 }, 00:16:57.984 { 00:16:57.984 "name": "BaseBdev2", 00:16:57.984 "uuid": "4baf6e61-a26c-4b8a-8911-da4793180421", 00:16:57.984 "is_configured": true, 00:16:57.984 "data_offset": 0, 00:16:57.984 "data_size": 65536 00:16:57.984 }, 00:16:57.984 { 00:16:57.984 "name": "BaseBdev3", 00:16:57.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.984 "is_configured": false, 00:16:57.984 "data_offset": 0, 00:16:57.984 "data_size": 0 00:16:57.984 } 00:16:57.984 ] 00:16:57.984 }' 00:16:57.984 10:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.984 10:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.553 [2024-07-26 10:27:11.419010] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.553 [2024-07-26 10:27:11.419042] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ad02d0 
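At this point the third base bdev has been claimed, and the next lines show the raid completing configuration while the harness waits for BaseBdev3 to register before re-checking the state. A sketch of that waitforbdev pattern with the same 2000 timeout seen in the trace; RPC is again an illustrative variable and the timeout unit is assumed to be milliseconds.

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$RPC bdev_wait_for_examine                # let outstanding examine callbacks finish
$RPC bdev_get_bdevs -b BaseBdev3 -t 2000  # poll for the bdev within the timeout

# With all three base bdevs claimed, the raid reports "state": "online" and
# num_base_bdevs_discovered of 3:
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'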
00:16:58.553 [2024-07-26 10:27:11.419050] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:58.553 [2024-07-26 10:27:11.419293] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19212d0 00:16:58.553 [2024-07-26 10:27:11.419401] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ad02d0 00:16:58.553 [2024-07-26 10:27:11.419414] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ad02d0 00:16:58.553 [2024-07-26 10:27:11.419561] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.553 BaseBdev3 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:58.553 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.812 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:59.071 [ 00:16:59.071 { 00:16:59.071 "name": "BaseBdev3", 00:16:59.071 "aliases": [ 00:16:59.071 "e87a0ea6-e7fb-4c68-83bd-5b358e939624" 00:16:59.071 ], 00:16:59.071 "product_name": "Malloc disk", 00:16:59.071 "block_size": 512, 00:16:59.071 "num_blocks": 65536, 00:16:59.071 "uuid": "e87a0ea6-e7fb-4c68-83bd-5b358e939624", 00:16:59.071 "assigned_rate_limits": { 00:16:59.071 "rw_ios_per_sec": 0, 00:16:59.071 "rw_mbytes_per_sec": 0, 00:16:59.071 "r_mbytes_per_sec": 0, 00:16:59.071 "w_mbytes_per_sec": 0 00:16:59.071 }, 00:16:59.071 "claimed": true, 00:16:59.071 "claim_type": "exclusive_write", 00:16:59.071 "zoned": false, 00:16:59.071 "supported_io_types": { 00:16:59.071 "read": true, 00:16:59.071 "write": true, 00:16:59.071 "unmap": true, 00:16:59.071 "flush": true, 00:16:59.071 "reset": true, 00:16:59.071 "nvme_admin": false, 00:16:59.071 "nvme_io": false, 00:16:59.071 "nvme_io_md": false, 00:16:59.071 "write_zeroes": true, 00:16:59.071 "zcopy": true, 00:16:59.071 "get_zone_info": false, 00:16:59.071 "zone_management": false, 00:16:59.071 "zone_append": false, 00:16:59.071 "compare": false, 00:16:59.071 "compare_and_write": false, 00:16:59.071 "abort": true, 00:16:59.071 "seek_hole": false, 00:16:59.071 "seek_data": false, 00:16:59.071 "copy": true, 00:16:59.071 "nvme_iov_md": false 00:16:59.071 }, 00:16:59.071 "memory_domains": [ 00:16:59.071 { 00:16:59.071 "dma_device_id": "system", 00:16:59.071 "dma_device_type": 1 00:16:59.071 }, 00:16:59.071 { 00:16:59.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.071 "dma_device_type": 2 00:16:59.071 } 00:16:59.071 ], 00:16:59.071 "driver_specific": {} 00:16:59.071 } 00:16:59.071 ] 00:16:59.071 10:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:59.071 10:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:59.071 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:59.071 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:59.071 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.072 10:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.331 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.331 "name": "Existed_Raid", 00:16:59.331 "uuid": "12329dcf-516d-4113-9583-fc66c0766d56", 00:16:59.332 "strip_size_kb": 64, 00:16:59.332 "state": "online", 00:16:59.332 "raid_level": "concat", 00:16:59.332 "superblock": false, 00:16:59.332 "num_base_bdevs": 3, 00:16:59.332 "num_base_bdevs_discovered": 3, 00:16:59.332 "num_base_bdevs_operational": 3, 00:16:59.332 "base_bdevs_list": [ 00:16:59.332 { 00:16:59.332 "name": "BaseBdev1", 00:16:59.332 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:16:59.332 "is_configured": true, 00:16:59.332 "data_offset": 0, 00:16:59.332 "data_size": 65536 00:16:59.332 }, 00:16:59.332 { 00:16:59.332 "name": "BaseBdev2", 00:16:59.332 "uuid": "4baf6e61-a26c-4b8a-8911-da4793180421", 00:16:59.332 "is_configured": true, 00:16:59.332 "data_offset": 0, 00:16:59.332 "data_size": 65536 00:16:59.332 }, 00:16:59.332 { 00:16:59.332 "name": "BaseBdev3", 00:16:59.332 "uuid": "e87a0ea6-e7fb-4c68-83bd-5b358e939624", 00:16:59.332 "is_configured": true, 00:16:59.332 "data_offset": 0, 00:16:59.332 "data_size": 65536 00:16:59.332 } 00:16:59.332 ] 00:16:59.332 }' 00:16:59.332 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.332 10:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
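Note: the preceding trace shows waitforbdev issuing bdev_wait_for_examine followed by bdev_get_bdevs -b BaseBdev3 -t 2000, after which verify_raid_bdev_state confirms the array is "online" with 3/3 base bdevs discovered. Below is a hedged sketch of that sequence under the same assumptions as the previous example (running target on /var/tmp/spdk-raid.sock, rpc.py path taken from this log); the jq output string is illustrative, not the test's own.

#!/usr/bin/env bash
# Illustrative sketch of the waitforbdev + online check traced above; same
# assumptions as the previous example (running target, rpc.py path from this log).
set -euo pipefail

rpc=(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)

# waitforbdev: flush pending examine callbacks, then wait up to 2000 ms for the
# new base bdev to be registered -- the two RPCs visible in the trace.
"${rpc[@]}" bdev_wait_for_examine
"${rpc[@]}" bdev_get_bdevs -b BaseBdev3 -t 2000 > /dev/null

# Once all three base bdevs are claimed, the concat array should report
# "online" with 3/3 discovered, matching the JSON dumped by the test.
"${rpc[@]}" bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'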
00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:59.930 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:00.189 [2024-07-26 10:27:12.903191] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.189 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:00.189 "name": "Existed_Raid", 00:17:00.189 "aliases": [ 00:17:00.189 "12329dcf-516d-4113-9583-fc66c0766d56" 00:17:00.189 ], 00:17:00.189 "product_name": "Raid Volume", 00:17:00.189 "block_size": 512, 00:17:00.189 "num_blocks": 196608, 00:17:00.189 "uuid": "12329dcf-516d-4113-9583-fc66c0766d56", 00:17:00.189 "assigned_rate_limits": { 00:17:00.189 "rw_ios_per_sec": 0, 00:17:00.189 "rw_mbytes_per_sec": 0, 00:17:00.189 "r_mbytes_per_sec": 0, 00:17:00.189 "w_mbytes_per_sec": 0 00:17:00.189 }, 00:17:00.189 "claimed": false, 00:17:00.189 "zoned": false, 00:17:00.189 "supported_io_types": { 00:17:00.189 "read": true, 00:17:00.189 "write": true, 00:17:00.189 "unmap": true, 00:17:00.189 "flush": true, 00:17:00.189 "reset": true, 00:17:00.189 "nvme_admin": false, 00:17:00.189 "nvme_io": false, 00:17:00.189 "nvme_io_md": false, 00:17:00.189 "write_zeroes": true, 00:17:00.189 "zcopy": false, 00:17:00.189 "get_zone_info": false, 00:17:00.189 "zone_management": false, 00:17:00.189 "zone_append": false, 00:17:00.189 "compare": false, 00:17:00.189 "compare_and_write": false, 00:17:00.189 "abort": false, 00:17:00.189 "seek_hole": false, 00:17:00.189 "seek_data": false, 00:17:00.189 "copy": false, 00:17:00.189 "nvme_iov_md": false 00:17:00.189 }, 00:17:00.189 "memory_domains": [ 00:17:00.189 { 00:17:00.189 "dma_device_id": "system", 00:17:00.189 "dma_device_type": 1 00:17:00.189 }, 00:17:00.189 { 00:17:00.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.190 "dma_device_type": 2 00:17:00.190 }, 00:17:00.190 { 00:17:00.190 "dma_device_id": "system", 00:17:00.190 "dma_device_type": 1 00:17:00.190 }, 00:17:00.190 { 00:17:00.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.190 "dma_device_type": 2 00:17:00.190 }, 00:17:00.190 { 00:17:00.190 "dma_device_id": "system", 00:17:00.190 "dma_device_type": 1 00:17:00.190 }, 00:17:00.190 { 00:17:00.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.190 "dma_device_type": 2 00:17:00.190 } 00:17:00.190 ], 00:17:00.190 "driver_specific": { 00:17:00.190 "raid": { 00:17:00.190 "uuid": "12329dcf-516d-4113-9583-fc66c0766d56", 00:17:00.190 "strip_size_kb": 64, 00:17:00.190 "state": "online", 00:17:00.190 "raid_level": "concat", 00:17:00.190 "superblock": false, 00:17:00.190 "num_base_bdevs": 3, 00:17:00.190 "num_base_bdevs_discovered": 3, 00:17:00.190 "num_base_bdevs_operational": 3, 00:17:00.190 "base_bdevs_list": [ 00:17:00.190 { 00:17:00.190 "name": "BaseBdev1", 00:17:00.190 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:17:00.190 "is_configured": true, 00:17:00.190 "data_offset": 0, 00:17:00.190 "data_size": 65536 00:17:00.190 }, 00:17:00.190 { 00:17:00.190 "name": "BaseBdev2", 00:17:00.190 "uuid": "4baf6e61-a26c-4b8a-8911-da4793180421", 00:17:00.190 "is_configured": true, 00:17:00.190 "data_offset": 0, 00:17:00.190 
"data_size": 65536 00:17:00.190 }, 00:17:00.190 { 00:17:00.190 "name": "BaseBdev3", 00:17:00.190 "uuid": "e87a0ea6-e7fb-4c68-83bd-5b358e939624", 00:17:00.190 "is_configured": true, 00:17:00.190 "data_offset": 0, 00:17:00.190 "data_size": 65536 00:17:00.190 } 00:17:00.190 ] 00:17:00.190 } 00:17:00.190 } 00:17:00.190 }' 00:17:00.190 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.190 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:00.190 BaseBdev2 00:17:00.190 BaseBdev3' 00:17:00.190 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.190 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:00.190 10:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.449 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.449 "name": "BaseBdev1", 00:17:00.449 "aliases": [ 00:17:00.449 "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349" 00:17:00.449 ], 00:17:00.449 "product_name": "Malloc disk", 00:17:00.449 "block_size": 512, 00:17:00.449 "num_blocks": 65536, 00:17:00.449 "uuid": "8ea84ac7-d6d0-4709-ab26-6b3b8d9c7349", 00:17:00.449 "assigned_rate_limits": { 00:17:00.449 "rw_ios_per_sec": 0, 00:17:00.449 "rw_mbytes_per_sec": 0, 00:17:00.449 "r_mbytes_per_sec": 0, 00:17:00.449 "w_mbytes_per_sec": 0 00:17:00.449 }, 00:17:00.449 "claimed": true, 00:17:00.449 "claim_type": "exclusive_write", 00:17:00.449 "zoned": false, 00:17:00.449 "supported_io_types": { 00:17:00.449 "read": true, 00:17:00.449 "write": true, 00:17:00.449 "unmap": true, 00:17:00.449 "flush": true, 00:17:00.449 "reset": true, 00:17:00.449 "nvme_admin": false, 00:17:00.449 "nvme_io": false, 00:17:00.449 "nvme_io_md": false, 00:17:00.449 "write_zeroes": true, 00:17:00.449 "zcopy": true, 00:17:00.449 "get_zone_info": false, 00:17:00.449 "zone_management": false, 00:17:00.449 "zone_append": false, 00:17:00.449 "compare": false, 00:17:00.449 "compare_and_write": false, 00:17:00.449 "abort": true, 00:17:00.449 "seek_hole": false, 00:17:00.449 "seek_data": false, 00:17:00.449 "copy": true, 00:17:00.449 "nvme_iov_md": false 00:17:00.449 }, 00:17:00.449 "memory_domains": [ 00:17:00.449 { 00:17:00.449 "dma_device_id": "system", 00:17:00.449 "dma_device_type": 1 00:17:00.449 }, 00:17:00.449 { 00:17:00.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.449 "dma_device_type": 2 00:17:00.449 } 00:17:00.449 ], 00:17:00.449 "driver_specific": {} 00:17:00.449 }' 00:17:00.449 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.449 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.449 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.449 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.449 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.707 10:27:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:00.707 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.966 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.966 "name": "BaseBdev2", 00:17:00.966 "aliases": [ 00:17:00.966 "4baf6e61-a26c-4b8a-8911-da4793180421" 00:17:00.966 ], 00:17:00.966 "product_name": "Malloc disk", 00:17:00.966 "block_size": 512, 00:17:00.966 "num_blocks": 65536, 00:17:00.966 "uuid": "4baf6e61-a26c-4b8a-8911-da4793180421", 00:17:00.966 "assigned_rate_limits": { 00:17:00.966 "rw_ios_per_sec": 0, 00:17:00.966 "rw_mbytes_per_sec": 0, 00:17:00.966 "r_mbytes_per_sec": 0, 00:17:00.966 "w_mbytes_per_sec": 0 00:17:00.966 }, 00:17:00.966 "claimed": true, 00:17:00.966 "claim_type": "exclusive_write", 00:17:00.966 "zoned": false, 00:17:00.966 "supported_io_types": { 00:17:00.966 "read": true, 00:17:00.966 "write": true, 00:17:00.966 "unmap": true, 00:17:00.966 "flush": true, 00:17:00.966 "reset": true, 00:17:00.966 "nvme_admin": false, 00:17:00.966 "nvme_io": false, 00:17:00.966 "nvme_io_md": false, 00:17:00.966 "write_zeroes": true, 00:17:00.966 "zcopy": true, 00:17:00.966 "get_zone_info": false, 00:17:00.966 "zone_management": false, 00:17:00.966 "zone_append": false, 00:17:00.966 "compare": false, 00:17:00.966 "compare_and_write": false, 00:17:00.966 "abort": true, 00:17:00.966 "seek_hole": false, 00:17:00.966 "seek_data": false, 00:17:00.966 "copy": true, 00:17:00.966 "nvme_iov_md": false 00:17:00.966 }, 00:17:00.966 "memory_domains": [ 00:17:00.966 { 00:17:00.966 "dma_device_id": "system", 00:17:00.966 "dma_device_type": 1 00:17:00.966 }, 00:17:00.966 { 00:17:00.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.966 "dma_device_type": 2 00:17:00.966 } 00:17:00.966 ], 00:17:00.966 "driver_specific": {} 00:17:00.966 }' 00:17:00.966 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.966 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.966 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.966 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.225 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.225 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.225 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.225 10:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:01.225 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.484 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.484 "name": "BaseBdev3", 00:17:01.484 "aliases": [ 00:17:01.484 "e87a0ea6-e7fb-4c68-83bd-5b358e939624" 00:17:01.484 ], 00:17:01.484 "product_name": "Malloc disk", 00:17:01.484 "block_size": 512, 00:17:01.484 "num_blocks": 65536, 00:17:01.484 "uuid": "e87a0ea6-e7fb-4c68-83bd-5b358e939624", 00:17:01.484 "assigned_rate_limits": { 00:17:01.484 "rw_ios_per_sec": 0, 00:17:01.484 "rw_mbytes_per_sec": 0, 00:17:01.484 "r_mbytes_per_sec": 0, 00:17:01.484 "w_mbytes_per_sec": 0 00:17:01.484 }, 00:17:01.484 "claimed": true, 00:17:01.484 "claim_type": "exclusive_write", 00:17:01.484 "zoned": false, 00:17:01.484 "supported_io_types": { 00:17:01.484 "read": true, 00:17:01.484 "write": true, 00:17:01.484 "unmap": true, 00:17:01.484 "flush": true, 00:17:01.484 "reset": true, 00:17:01.484 "nvme_admin": false, 00:17:01.484 "nvme_io": false, 00:17:01.484 "nvme_io_md": false, 00:17:01.484 "write_zeroes": true, 00:17:01.484 "zcopy": true, 00:17:01.484 "get_zone_info": false, 00:17:01.484 "zone_management": false, 00:17:01.484 "zone_append": false, 00:17:01.484 "compare": false, 00:17:01.484 "compare_and_write": false, 00:17:01.484 "abort": true, 00:17:01.484 "seek_hole": false, 00:17:01.484 "seek_data": false, 00:17:01.484 "copy": true, 00:17:01.484 "nvme_iov_md": false 00:17:01.484 }, 00:17:01.484 "memory_domains": [ 00:17:01.484 { 00:17:01.484 "dma_device_id": "system", 00:17:01.484 "dma_device_type": 1 00:17:01.484 }, 00:17:01.484 { 00:17:01.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.484 "dma_device_type": 2 00:17:01.484 } 00:17:01.484 ], 00:17:01.484 "driver_specific": {} 00:17:01.484 }' 00:17:01.484 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.484 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.743 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.743 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.743 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.743 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.743 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.744 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.744 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.744 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.744 10:27:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.744 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.003 [2024-07-26 10:27:14.852086] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.003 [2024-07-26 10:27:14.852109] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.003 [2024-07-26 10:27:14.852156] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.003 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.004 10:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.263 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.263 "name": "Existed_Raid", 00:17:02.263 "uuid": "12329dcf-516d-4113-9583-fc66c0766d56", 00:17:02.263 "strip_size_kb": 64, 00:17:02.263 "state": "offline", 00:17:02.263 "raid_level": "concat", 00:17:02.263 "superblock": false, 00:17:02.263 "num_base_bdevs": 3, 00:17:02.263 "num_base_bdevs_discovered": 2, 00:17:02.263 "num_base_bdevs_operational": 2, 00:17:02.263 "base_bdevs_list": [ 00:17:02.263 { 00:17:02.263 "name": null, 00:17:02.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.263 "is_configured": false, 00:17:02.263 "data_offset": 0, 00:17:02.263 "data_size": 65536 00:17:02.263 }, 00:17:02.263 { 00:17:02.263 "name": "BaseBdev2", 00:17:02.263 "uuid": "4baf6e61-a26c-4b8a-8911-da4793180421", 00:17:02.263 
"is_configured": true, 00:17:02.263 "data_offset": 0, 00:17:02.263 "data_size": 65536 00:17:02.263 }, 00:17:02.263 { 00:17:02.263 "name": "BaseBdev3", 00:17:02.263 "uuid": "e87a0ea6-e7fb-4c68-83bd-5b358e939624", 00:17:02.263 "is_configured": true, 00:17:02.263 "data_offset": 0, 00:17:02.263 "data_size": 65536 00:17:02.263 } 00:17:02.263 ] 00:17:02.263 }' 00:17:02.263 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.263 10:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.832 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:02.832 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:02.832 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.832 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:03.092 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:03.092 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.092 10:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:03.351 [2024-07-26 10:27:16.124507] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:03.351 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:03.351 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:03.351 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.351 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:03.610 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:03.610 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.610 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:03.869 [2024-07-26 10:27:16.595630] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:03.869 [2024-07-26 10:27:16.595667] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ad02d0 name Existed_Raid, state offline 00:17:03.869 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:03.869 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:03.869 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.869 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.129 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:04.129 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:17:04.129 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:04.129 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:04.129 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:04.129 10:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:04.388 BaseBdev2 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:04.388 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.647 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:04.647 [ 00:17:04.647 { 00:17:04.647 "name": "BaseBdev2", 00:17:04.647 "aliases": [ 00:17:04.647 "07ebeff4-30b9-499e-b1c9-dd9f924b0919" 00:17:04.647 ], 00:17:04.647 "product_name": "Malloc disk", 00:17:04.647 "block_size": 512, 00:17:04.647 "num_blocks": 65536, 00:17:04.647 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:04.647 "assigned_rate_limits": { 00:17:04.647 "rw_ios_per_sec": 0, 00:17:04.647 "rw_mbytes_per_sec": 0, 00:17:04.647 "r_mbytes_per_sec": 0, 00:17:04.647 "w_mbytes_per_sec": 0 00:17:04.647 }, 00:17:04.647 "claimed": false, 00:17:04.647 "zoned": false, 00:17:04.647 "supported_io_types": { 00:17:04.647 "read": true, 00:17:04.647 "write": true, 00:17:04.647 "unmap": true, 00:17:04.647 "flush": true, 00:17:04.647 "reset": true, 00:17:04.647 "nvme_admin": false, 00:17:04.647 "nvme_io": false, 00:17:04.647 "nvme_io_md": false, 00:17:04.647 "write_zeroes": true, 00:17:04.647 "zcopy": true, 00:17:04.647 "get_zone_info": false, 00:17:04.647 "zone_management": false, 00:17:04.647 "zone_append": false, 00:17:04.647 "compare": false, 00:17:04.647 "compare_and_write": false, 00:17:04.647 "abort": true, 00:17:04.647 "seek_hole": false, 00:17:04.647 "seek_data": false, 00:17:04.647 "copy": true, 00:17:04.647 "nvme_iov_md": false 00:17:04.647 }, 00:17:04.647 "memory_domains": [ 00:17:04.647 { 00:17:04.647 "dma_device_id": "system", 00:17:04.647 "dma_device_type": 1 00:17:04.647 }, 00:17:04.647 { 00:17:04.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.647 "dma_device_type": 2 00:17:04.647 } 00:17:04.647 ], 00:17:04.647 "driver_specific": {} 00:17:04.647 } 00:17:04.647 ] 00:17:04.647 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:04.647 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:04.647 10:27:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:04.647 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:04.906 BaseBdev3 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:04.906 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.165 10:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.424 [ 00:17:05.424 { 00:17:05.424 "name": "BaseBdev3", 00:17:05.424 "aliases": [ 00:17:05.424 "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a" 00:17:05.424 ], 00:17:05.424 "product_name": "Malloc disk", 00:17:05.424 "block_size": 512, 00:17:05.424 "num_blocks": 65536, 00:17:05.424 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:05.424 "assigned_rate_limits": { 00:17:05.424 "rw_ios_per_sec": 0, 00:17:05.424 "rw_mbytes_per_sec": 0, 00:17:05.424 "r_mbytes_per_sec": 0, 00:17:05.424 "w_mbytes_per_sec": 0 00:17:05.424 }, 00:17:05.424 "claimed": false, 00:17:05.424 "zoned": false, 00:17:05.424 "supported_io_types": { 00:17:05.424 "read": true, 00:17:05.424 "write": true, 00:17:05.424 "unmap": true, 00:17:05.424 "flush": true, 00:17:05.424 "reset": true, 00:17:05.424 "nvme_admin": false, 00:17:05.424 "nvme_io": false, 00:17:05.424 "nvme_io_md": false, 00:17:05.424 "write_zeroes": true, 00:17:05.424 "zcopy": true, 00:17:05.424 "get_zone_info": false, 00:17:05.424 "zone_management": false, 00:17:05.424 "zone_append": false, 00:17:05.424 "compare": false, 00:17:05.424 "compare_and_write": false, 00:17:05.424 "abort": true, 00:17:05.424 "seek_hole": false, 00:17:05.424 "seek_data": false, 00:17:05.424 "copy": true, 00:17:05.424 "nvme_iov_md": false 00:17:05.424 }, 00:17:05.424 "memory_domains": [ 00:17:05.424 { 00:17:05.424 "dma_device_id": "system", 00:17:05.424 "dma_device_type": 1 00:17:05.424 }, 00:17:05.424 { 00:17:05.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.424 "dma_device_type": 2 00:17:05.424 } 00:17:05.424 ], 00:17:05.424 "driver_specific": {} 00:17:05.424 } 00:17:05.424 ] 00:17:05.424 10:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:05.424 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:05.424 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:05.424 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:17:05.684 [2024-07-26 10:27:18.414763] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.684 [2024-07-26 10:27:18.414802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.684 [2024-07-26 10:27:18.414821] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.684 [2024-07-26 10:27:18.416045] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.684 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.943 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.943 "name": "Existed_Raid", 00:17:05.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.943 "strip_size_kb": 64, 00:17:05.943 "state": "configuring", 00:17:05.943 "raid_level": "concat", 00:17:05.943 "superblock": false, 00:17:05.943 "num_base_bdevs": 3, 00:17:05.943 "num_base_bdevs_discovered": 2, 00:17:05.943 "num_base_bdevs_operational": 3, 00:17:05.943 "base_bdevs_list": [ 00:17:05.943 { 00:17:05.943 "name": "BaseBdev1", 00:17:05.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.943 "is_configured": false, 00:17:05.943 "data_offset": 0, 00:17:05.943 "data_size": 0 00:17:05.943 }, 00:17:05.943 { 00:17:05.943 "name": "BaseBdev2", 00:17:05.943 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:05.943 "is_configured": true, 00:17:05.943 "data_offset": 0, 00:17:05.943 "data_size": 65536 00:17:05.943 }, 00:17:05.943 { 00:17:05.943 "name": "BaseBdev3", 00:17:05.943 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:05.943 "is_configured": true, 00:17:05.943 "data_offset": 0, 00:17:05.943 "data_size": 65536 00:17:05.943 } 00:17:05.943 ] 00:17:05.943 }' 00:17:05.943 10:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.943 10:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:06.769 [2024-07-26 10:27:19.437442] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.769 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.028 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.028 "name": "Existed_Raid", 00:17:07.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.028 "strip_size_kb": 64, 00:17:07.028 "state": "configuring", 00:17:07.028 "raid_level": "concat", 00:17:07.028 "superblock": false, 00:17:07.028 "num_base_bdevs": 3, 00:17:07.028 "num_base_bdevs_discovered": 1, 00:17:07.028 "num_base_bdevs_operational": 3, 00:17:07.028 "base_bdevs_list": [ 00:17:07.028 { 00:17:07.028 "name": "BaseBdev1", 00:17:07.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.028 "is_configured": false, 00:17:07.028 "data_offset": 0, 00:17:07.028 "data_size": 0 00:17:07.028 }, 00:17:07.028 { 00:17:07.028 "name": null, 00:17:07.028 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:07.028 "is_configured": false, 00:17:07.028 "data_offset": 0, 00:17:07.028 "data_size": 65536 00:17:07.028 }, 00:17:07.028 { 00:17:07.028 "name": "BaseBdev3", 00:17:07.028 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:07.028 "is_configured": true, 00:17:07.028 "data_offset": 0, 00:17:07.028 "data_size": 65536 00:17:07.028 } 00:17:07.028 ] 00:17:07.028 }' 00:17:07.028 10:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.028 10:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.596 10:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.596 10:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:07.596 10:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ 
false == \f\a\l\s\e ]] 00:17:07.596 10:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:07.855 [2024-07-26 10:27:20.699994] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.855 BaseBdev1 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:07.855 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.114 10:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.373 [ 00:17:08.373 { 00:17:08.373 "name": "BaseBdev1", 00:17:08.373 "aliases": [ 00:17:08.373 "fcba80f8-e619-4edc-9cf4-a0d8fb14420c" 00:17:08.373 ], 00:17:08.373 "product_name": "Malloc disk", 00:17:08.373 "block_size": 512, 00:17:08.373 "num_blocks": 65536, 00:17:08.373 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:08.373 "assigned_rate_limits": { 00:17:08.373 "rw_ios_per_sec": 0, 00:17:08.373 "rw_mbytes_per_sec": 0, 00:17:08.373 "r_mbytes_per_sec": 0, 00:17:08.373 "w_mbytes_per_sec": 0 00:17:08.373 }, 00:17:08.373 "claimed": true, 00:17:08.373 "claim_type": "exclusive_write", 00:17:08.373 "zoned": false, 00:17:08.373 "supported_io_types": { 00:17:08.373 "read": true, 00:17:08.373 "write": true, 00:17:08.373 "unmap": true, 00:17:08.373 "flush": true, 00:17:08.373 "reset": true, 00:17:08.373 "nvme_admin": false, 00:17:08.373 "nvme_io": false, 00:17:08.373 "nvme_io_md": false, 00:17:08.373 "write_zeroes": true, 00:17:08.373 "zcopy": true, 00:17:08.373 "get_zone_info": false, 00:17:08.373 "zone_management": false, 00:17:08.373 "zone_append": false, 00:17:08.373 "compare": false, 00:17:08.374 "compare_and_write": false, 00:17:08.374 "abort": true, 00:17:08.374 "seek_hole": false, 00:17:08.374 "seek_data": false, 00:17:08.374 "copy": true, 00:17:08.374 "nvme_iov_md": false 00:17:08.374 }, 00:17:08.374 "memory_domains": [ 00:17:08.374 { 00:17:08.374 "dma_device_id": "system", 00:17:08.374 "dma_device_type": 1 00:17:08.374 }, 00:17:08.374 { 00:17:08.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.374 "dma_device_type": 2 00:17:08.374 } 00:17:08.374 ], 00:17:08.374 "driver_specific": {} 00:17:08.374 } 00:17:08.374 ] 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.374 10:27:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.374 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.633 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.633 "name": "Existed_Raid", 00:17:08.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.633 "strip_size_kb": 64, 00:17:08.633 "state": "configuring", 00:17:08.633 "raid_level": "concat", 00:17:08.633 "superblock": false, 00:17:08.633 "num_base_bdevs": 3, 00:17:08.633 "num_base_bdevs_discovered": 2, 00:17:08.633 "num_base_bdevs_operational": 3, 00:17:08.633 "base_bdevs_list": [ 00:17:08.633 { 00:17:08.633 "name": "BaseBdev1", 00:17:08.633 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:08.633 "is_configured": true, 00:17:08.633 "data_offset": 0, 00:17:08.633 "data_size": 65536 00:17:08.633 }, 00:17:08.633 { 00:17:08.633 "name": null, 00:17:08.633 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:08.633 "is_configured": false, 00:17:08.633 "data_offset": 0, 00:17:08.633 "data_size": 65536 00:17:08.633 }, 00:17:08.633 { 00:17:08.633 "name": "BaseBdev3", 00:17:08.633 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:08.633 "is_configured": true, 00:17:08.633 "data_offset": 0, 00:17:08.633 "data_size": 65536 00:17:08.633 } 00:17:08.633 ] 00:17:08.633 }' 00:17:08.633 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.633 10:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.200 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.200 10:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:09.459 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:09.459 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:09.718 [2024-07-26 10:27:22.400531] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:09.718 10:27:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.718 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.978 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.978 "name": "Existed_Raid", 00:17:09.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.978 "strip_size_kb": 64, 00:17:09.978 "state": "configuring", 00:17:09.978 "raid_level": "concat", 00:17:09.978 "superblock": false, 00:17:09.978 "num_base_bdevs": 3, 00:17:09.978 "num_base_bdevs_discovered": 1, 00:17:09.978 "num_base_bdevs_operational": 3, 00:17:09.978 "base_bdevs_list": [ 00:17:09.978 { 00:17:09.978 "name": "BaseBdev1", 00:17:09.978 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:09.978 "is_configured": true, 00:17:09.978 "data_offset": 0, 00:17:09.978 "data_size": 65536 00:17:09.978 }, 00:17:09.978 { 00:17:09.978 "name": null, 00:17:09.978 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:09.978 "is_configured": false, 00:17:09.978 "data_offset": 0, 00:17:09.978 "data_size": 65536 00:17:09.978 }, 00:17:09.978 { 00:17:09.978 "name": null, 00:17:09.978 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:09.978 "is_configured": false, 00:17:09.978 "data_offset": 0, 00:17:09.978 "data_size": 65536 00:17:09.978 } 00:17:09.978 ] 00:17:09.978 }' 00:17:09.978 10:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.978 10:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.552 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.552 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:10.813 [2024-07-26 10:27:23.663889] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.813 10:27:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.813 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.072 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.072 "name": "Existed_Raid", 00:17:11.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.072 "strip_size_kb": 64, 00:17:11.072 "state": "configuring", 00:17:11.072 "raid_level": "concat", 00:17:11.072 "superblock": false, 00:17:11.072 "num_base_bdevs": 3, 00:17:11.072 "num_base_bdevs_discovered": 2, 00:17:11.072 "num_base_bdevs_operational": 3, 00:17:11.072 "base_bdevs_list": [ 00:17:11.072 { 00:17:11.072 "name": "BaseBdev1", 00:17:11.072 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:11.072 "is_configured": true, 00:17:11.072 "data_offset": 0, 00:17:11.072 "data_size": 65536 00:17:11.072 }, 00:17:11.072 { 00:17:11.072 "name": null, 00:17:11.072 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:11.072 "is_configured": false, 00:17:11.072 "data_offset": 0, 00:17:11.072 "data_size": 65536 00:17:11.072 }, 00:17:11.072 { 00:17:11.072 "name": "BaseBdev3", 00:17:11.072 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:11.072 "is_configured": true, 00:17:11.072 "data_offset": 0, 00:17:11.072 "data_size": 65536 00:17:11.072 } 00:17:11.072 ] 00:17:11.072 }' 00:17:11.072 10:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.072 10:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.639 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.639 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:11.898 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:11.898 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:12.157 [2024-07-26 
10:27:24.919384] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.157 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.158 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.158 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.158 10:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.416 10:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.416 "name": "Existed_Raid", 00:17:12.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.416 "strip_size_kb": 64, 00:17:12.416 "state": "configuring", 00:17:12.416 "raid_level": "concat", 00:17:12.416 "superblock": false, 00:17:12.416 "num_base_bdevs": 3, 00:17:12.416 "num_base_bdevs_discovered": 1, 00:17:12.416 "num_base_bdevs_operational": 3, 00:17:12.416 "base_bdevs_list": [ 00:17:12.416 { 00:17:12.416 "name": null, 00:17:12.416 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:12.416 "is_configured": false, 00:17:12.416 "data_offset": 0, 00:17:12.416 "data_size": 65536 00:17:12.416 }, 00:17:12.416 { 00:17:12.416 "name": null, 00:17:12.416 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:12.416 "is_configured": false, 00:17:12.416 "data_offset": 0, 00:17:12.416 "data_size": 65536 00:17:12.416 }, 00:17:12.416 { 00:17:12.416 "name": "BaseBdev3", 00:17:12.416 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:12.416 "is_configured": true, 00:17:12.416 "data_offset": 0, 00:17:12.416 "data_size": 65536 00:17:12.416 } 00:17:12.416 ] 00:17:12.416 }' 00:17:12.416 10:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.416 10:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.015 10:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.015 10:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:13.273 10:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:13.273 10:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:13.532 [2024-07-26 10:27:26.196945] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.532 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.791 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.791 "name": "Existed_Raid", 00:17:13.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.791 "strip_size_kb": 64, 00:17:13.791 "state": "configuring", 00:17:13.791 "raid_level": "concat", 00:17:13.791 "superblock": false, 00:17:13.791 "num_base_bdevs": 3, 00:17:13.791 "num_base_bdevs_discovered": 2, 00:17:13.791 "num_base_bdevs_operational": 3, 00:17:13.791 "base_bdevs_list": [ 00:17:13.791 { 00:17:13.791 "name": null, 00:17:13.791 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:13.791 "is_configured": false, 00:17:13.791 "data_offset": 0, 00:17:13.791 "data_size": 65536 00:17:13.791 }, 00:17:13.791 { 00:17:13.791 "name": "BaseBdev2", 00:17:13.791 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:13.791 "is_configured": true, 00:17:13.791 "data_offset": 0, 00:17:13.791 "data_size": 65536 00:17:13.791 }, 00:17:13.791 { 00:17:13.791 "name": "BaseBdev3", 00:17:13.791 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:13.791 "is_configured": true, 00:17:13.791 "data_offset": 0, 00:17:13.791 "data_size": 65536 00:17:13.791 } 00:17:13.791 ] 00:17:13.791 }' 00:17:13.791 10:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.791 10:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.358 10:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.358 10:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.358 10:27:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:14.358 10:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.358 10:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:14.616 10:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fcba80f8-e619-4edc-9cf4-a0d8fb14420c 00:17:14.875 [2024-07-26 10:27:27.683920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:14.875 [2024-07-26 10:27:27.683951] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x191cd60 00:17:14.875 [2024-07-26 10:27:27.683960] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:14.875 [2024-07-26 10:27:27.684134] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1920f70 00:17:14.875 [2024-07-26 10:27:27.684247] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x191cd60 00:17:14.875 [2024-07-26 10:27:27.684256] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x191cd60 00:17:14.876 [2024-07-26 10:27:27.684403] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.876 NewBaseBdev 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:14.876 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.135 10:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:15.394 [ 00:17:15.394 { 00:17:15.394 "name": "NewBaseBdev", 00:17:15.394 "aliases": [ 00:17:15.394 "fcba80f8-e619-4edc-9cf4-a0d8fb14420c" 00:17:15.394 ], 00:17:15.394 "product_name": "Malloc disk", 00:17:15.394 "block_size": 512, 00:17:15.394 "num_blocks": 65536, 00:17:15.394 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:15.394 "assigned_rate_limits": { 00:17:15.394 "rw_ios_per_sec": 0, 00:17:15.394 "rw_mbytes_per_sec": 0, 00:17:15.394 "r_mbytes_per_sec": 0, 00:17:15.394 "w_mbytes_per_sec": 0 00:17:15.394 }, 00:17:15.394 "claimed": true, 00:17:15.394 "claim_type": "exclusive_write", 00:17:15.394 "zoned": false, 00:17:15.394 "supported_io_types": { 00:17:15.394 "read": true, 00:17:15.394 "write": true, 00:17:15.394 "unmap": true, 00:17:15.394 "flush": true, 00:17:15.394 "reset": true, 00:17:15.394 "nvme_admin": false, 00:17:15.394 "nvme_io": false, 00:17:15.394 "nvme_io_md": false, 
00:17:15.394 "write_zeroes": true, 00:17:15.394 "zcopy": true, 00:17:15.394 "get_zone_info": false, 00:17:15.394 "zone_management": false, 00:17:15.394 "zone_append": false, 00:17:15.394 "compare": false, 00:17:15.394 "compare_and_write": false, 00:17:15.394 "abort": true, 00:17:15.394 "seek_hole": false, 00:17:15.394 "seek_data": false, 00:17:15.394 "copy": true, 00:17:15.394 "nvme_iov_md": false 00:17:15.394 }, 00:17:15.394 "memory_domains": [ 00:17:15.394 { 00:17:15.394 "dma_device_id": "system", 00:17:15.394 "dma_device_type": 1 00:17:15.394 }, 00:17:15.394 { 00:17:15.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.394 "dma_device_type": 2 00:17:15.394 } 00:17:15.394 ], 00:17:15.394 "driver_specific": {} 00:17:15.394 } 00:17:15.394 ] 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.394 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.654 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.654 "name": "Existed_Raid", 00:17:15.654 "uuid": "4849e55a-a3b5-46cc-821d-818e7ec324a9", 00:17:15.654 "strip_size_kb": 64, 00:17:15.654 "state": "online", 00:17:15.654 "raid_level": "concat", 00:17:15.654 "superblock": false, 00:17:15.654 "num_base_bdevs": 3, 00:17:15.654 "num_base_bdevs_discovered": 3, 00:17:15.654 "num_base_bdevs_operational": 3, 00:17:15.654 "base_bdevs_list": [ 00:17:15.654 { 00:17:15.654 "name": "NewBaseBdev", 00:17:15.654 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:15.654 "is_configured": true, 00:17:15.654 "data_offset": 0, 00:17:15.654 "data_size": 65536 00:17:15.654 }, 00:17:15.654 { 00:17:15.654 "name": "BaseBdev2", 00:17:15.654 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:15.654 "is_configured": true, 00:17:15.654 "data_offset": 0, 00:17:15.654 "data_size": 65536 00:17:15.654 }, 00:17:15.654 { 00:17:15.654 "name": "BaseBdev3", 00:17:15.654 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:15.654 "is_configured": true, 00:17:15.654 "data_offset": 0, 00:17:15.654 "data_size": 65536 00:17:15.654 } 00:17:15.654 ] 00:17:15.654 }' 
00:17:15.654 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.654 10:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:16.222 10:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:16.482 [2024-07-26 10:27:29.160078] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.482 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:16.482 "name": "Existed_Raid", 00:17:16.482 "aliases": [ 00:17:16.482 "4849e55a-a3b5-46cc-821d-818e7ec324a9" 00:17:16.482 ], 00:17:16.482 "product_name": "Raid Volume", 00:17:16.482 "block_size": 512, 00:17:16.482 "num_blocks": 196608, 00:17:16.482 "uuid": "4849e55a-a3b5-46cc-821d-818e7ec324a9", 00:17:16.482 "assigned_rate_limits": { 00:17:16.482 "rw_ios_per_sec": 0, 00:17:16.482 "rw_mbytes_per_sec": 0, 00:17:16.482 "r_mbytes_per_sec": 0, 00:17:16.482 "w_mbytes_per_sec": 0 00:17:16.482 }, 00:17:16.482 "claimed": false, 00:17:16.482 "zoned": false, 00:17:16.482 "supported_io_types": { 00:17:16.482 "read": true, 00:17:16.482 "write": true, 00:17:16.482 "unmap": true, 00:17:16.482 "flush": true, 00:17:16.482 "reset": true, 00:17:16.482 "nvme_admin": false, 00:17:16.482 "nvme_io": false, 00:17:16.482 "nvme_io_md": false, 00:17:16.482 "write_zeroes": true, 00:17:16.482 "zcopy": false, 00:17:16.482 "get_zone_info": false, 00:17:16.482 "zone_management": false, 00:17:16.482 "zone_append": false, 00:17:16.482 "compare": false, 00:17:16.482 "compare_and_write": false, 00:17:16.482 "abort": false, 00:17:16.482 "seek_hole": false, 00:17:16.482 "seek_data": false, 00:17:16.482 "copy": false, 00:17:16.482 "nvme_iov_md": false 00:17:16.482 }, 00:17:16.482 "memory_domains": [ 00:17:16.482 { 00:17:16.482 "dma_device_id": "system", 00:17:16.482 "dma_device_type": 1 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.482 "dma_device_type": 2 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "dma_device_id": "system", 00:17:16.482 "dma_device_type": 1 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.482 "dma_device_type": 2 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "dma_device_id": "system", 00:17:16.482 "dma_device_type": 1 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.482 "dma_device_type": 2 00:17:16.482 } 00:17:16.482 ], 00:17:16.482 "driver_specific": { 00:17:16.482 "raid": { 00:17:16.482 "uuid": "4849e55a-a3b5-46cc-821d-818e7ec324a9", 00:17:16.482 "strip_size_kb": 64, 00:17:16.482 
"state": "online", 00:17:16.482 "raid_level": "concat", 00:17:16.482 "superblock": false, 00:17:16.482 "num_base_bdevs": 3, 00:17:16.482 "num_base_bdevs_discovered": 3, 00:17:16.482 "num_base_bdevs_operational": 3, 00:17:16.482 "base_bdevs_list": [ 00:17:16.482 { 00:17:16.482 "name": "NewBaseBdev", 00:17:16.482 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:16.482 "is_configured": true, 00:17:16.482 "data_offset": 0, 00:17:16.482 "data_size": 65536 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "name": "BaseBdev2", 00:17:16.482 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:16.482 "is_configured": true, 00:17:16.482 "data_offset": 0, 00:17:16.482 "data_size": 65536 00:17:16.482 }, 00:17:16.482 { 00:17:16.482 "name": "BaseBdev3", 00:17:16.482 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:16.482 "is_configured": true, 00:17:16.482 "data_offset": 0, 00:17:16.482 "data_size": 65536 00:17:16.482 } 00:17:16.482 ] 00:17:16.482 } 00:17:16.482 } 00:17:16.482 }' 00:17:16.482 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.482 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:16.482 BaseBdev2 00:17:16.482 BaseBdev3' 00:17:16.482 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:16.482 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:16.482 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.742 "name": "NewBaseBdev", 00:17:16.742 "aliases": [ 00:17:16.742 "fcba80f8-e619-4edc-9cf4-a0d8fb14420c" 00:17:16.742 ], 00:17:16.742 "product_name": "Malloc disk", 00:17:16.742 "block_size": 512, 00:17:16.742 "num_blocks": 65536, 00:17:16.742 "uuid": "fcba80f8-e619-4edc-9cf4-a0d8fb14420c", 00:17:16.742 "assigned_rate_limits": { 00:17:16.742 "rw_ios_per_sec": 0, 00:17:16.742 "rw_mbytes_per_sec": 0, 00:17:16.742 "r_mbytes_per_sec": 0, 00:17:16.742 "w_mbytes_per_sec": 0 00:17:16.742 }, 00:17:16.742 "claimed": true, 00:17:16.742 "claim_type": "exclusive_write", 00:17:16.742 "zoned": false, 00:17:16.742 "supported_io_types": { 00:17:16.742 "read": true, 00:17:16.742 "write": true, 00:17:16.742 "unmap": true, 00:17:16.742 "flush": true, 00:17:16.742 "reset": true, 00:17:16.742 "nvme_admin": false, 00:17:16.742 "nvme_io": false, 00:17:16.742 "nvme_io_md": false, 00:17:16.742 "write_zeroes": true, 00:17:16.742 "zcopy": true, 00:17:16.742 "get_zone_info": false, 00:17:16.742 "zone_management": false, 00:17:16.742 "zone_append": false, 00:17:16.742 "compare": false, 00:17:16.742 "compare_and_write": false, 00:17:16.742 "abort": true, 00:17:16.742 "seek_hole": false, 00:17:16.742 "seek_data": false, 00:17:16.742 "copy": true, 00:17:16.742 "nvme_iov_md": false 00:17:16.742 }, 00:17:16.742 "memory_domains": [ 00:17:16.742 { 00:17:16.742 "dma_device_id": "system", 00:17:16.742 "dma_device_type": 1 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.742 "dma_device_type": 2 00:17:16.742 } 00:17:16.742 ], 00:17:16.742 "driver_specific": {} 00:17:16.742 }' 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:16.742 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:17.002 10:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.261 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.261 "name": "BaseBdev2", 00:17:17.261 "aliases": [ 00:17:17.261 "07ebeff4-30b9-499e-b1c9-dd9f924b0919" 00:17:17.261 ], 00:17:17.261 "product_name": "Malloc disk", 00:17:17.261 "block_size": 512, 00:17:17.261 "num_blocks": 65536, 00:17:17.261 "uuid": "07ebeff4-30b9-499e-b1c9-dd9f924b0919", 00:17:17.261 "assigned_rate_limits": { 00:17:17.261 "rw_ios_per_sec": 0, 00:17:17.261 "rw_mbytes_per_sec": 0, 00:17:17.261 "r_mbytes_per_sec": 0, 00:17:17.261 "w_mbytes_per_sec": 0 00:17:17.261 }, 00:17:17.261 "claimed": true, 00:17:17.261 "claim_type": "exclusive_write", 00:17:17.261 "zoned": false, 00:17:17.261 "supported_io_types": { 00:17:17.261 "read": true, 00:17:17.261 "write": true, 00:17:17.261 "unmap": true, 00:17:17.261 "flush": true, 00:17:17.261 "reset": true, 00:17:17.261 "nvme_admin": false, 00:17:17.261 "nvme_io": false, 00:17:17.261 "nvme_io_md": false, 00:17:17.261 "write_zeroes": true, 00:17:17.261 "zcopy": true, 00:17:17.261 "get_zone_info": false, 00:17:17.261 "zone_management": false, 00:17:17.261 "zone_append": false, 00:17:17.261 "compare": false, 00:17:17.261 "compare_and_write": false, 00:17:17.261 "abort": true, 00:17:17.261 "seek_hole": false, 00:17:17.261 "seek_data": false, 00:17:17.261 "copy": true, 00:17:17.261 "nvme_iov_md": false 00:17:17.261 }, 00:17:17.261 "memory_domains": [ 00:17:17.261 { 00:17:17.261 "dma_device_id": "system", 00:17:17.261 "dma_device_type": 1 00:17:17.261 }, 00:17:17.261 { 00:17:17.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.261 "dma_device_type": 2 00:17:17.261 } 00:17:17.261 ], 00:17:17.261 "driver_specific": {} 00:17:17.261 }' 00:17:17.261 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.261 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.261 10:27:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.261 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.261 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:17.521 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.780 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.780 "name": "BaseBdev3", 00:17:17.780 "aliases": [ 00:17:17.780 "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a" 00:17:17.780 ], 00:17:17.780 "product_name": "Malloc disk", 00:17:17.780 "block_size": 512, 00:17:17.780 "num_blocks": 65536, 00:17:17.780 "uuid": "4a7a77d7-ab3c-44cc-80ff-0c967df0bb6a", 00:17:17.780 "assigned_rate_limits": { 00:17:17.780 "rw_ios_per_sec": 0, 00:17:17.780 "rw_mbytes_per_sec": 0, 00:17:17.780 "r_mbytes_per_sec": 0, 00:17:17.780 "w_mbytes_per_sec": 0 00:17:17.780 }, 00:17:17.780 "claimed": true, 00:17:17.780 "claim_type": "exclusive_write", 00:17:17.780 "zoned": false, 00:17:17.780 "supported_io_types": { 00:17:17.780 "read": true, 00:17:17.780 "write": true, 00:17:17.780 "unmap": true, 00:17:17.780 "flush": true, 00:17:17.780 "reset": true, 00:17:17.780 "nvme_admin": false, 00:17:17.780 "nvme_io": false, 00:17:17.780 "nvme_io_md": false, 00:17:17.780 "write_zeroes": true, 00:17:17.780 "zcopy": true, 00:17:17.780 "get_zone_info": false, 00:17:17.780 "zone_management": false, 00:17:17.780 "zone_append": false, 00:17:17.780 "compare": false, 00:17:17.780 "compare_and_write": false, 00:17:17.780 "abort": true, 00:17:17.780 "seek_hole": false, 00:17:17.780 "seek_data": false, 00:17:17.780 "copy": true, 00:17:17.780 "nvme_iov_md": false 00:17:17.780 }, 00:17:17.780 "memory_domains": [ 00:17:17.780 { 00:17:17.780 "dma_device_id": "system", 00:17:17.780 "dma_device_type": 1 00:17:17.780 }, 00:17:17.780 { 00:17:17.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.780 "dma_device_type": 2 00:17:17.780 } 00:17:17.780 ], 00:17:17.780 "driver_specific": {} 00:17:17.780 }' 00:17:17.780 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.780 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.039 10:27:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.039 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.040 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.299 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.299 10:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.299 [2024-07-26 10:27:31.153109] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.300 [2024-07-26 10:27:31.153133] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.300 [2024-07-26 10:27:31.153187] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.300 [2024-07-26 10:27:31.153235] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.300 [2024-07-26 10:27:31.153245] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x191cd60 name Existed_Raid, state offline 00:17:18.300 10:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3385711 00:17:18.300 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3385711 ']' 00:17:18.300 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3385711 00:17:18.300 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:18.300 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.300 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3385711 00:17:18.559 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.559 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.559 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3385711' 00:17:18.559 killing process with pid 3385711 00:17:18.559 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3385711 00:17:18.559 [2024-07-26 10:27:31.234078] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.559 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3385711 00:17:18.559 [2024-07-26 10:27:31.258238] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.560 10:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:18.560 00:17:18.560 real 0m27.036s 00:17:18.560 user 0m49.495s 00:17:18.560 sys 0m5.006s 00:17:18.560 10:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.560 10:27:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.560 ************************************ 00:17:18.560 END TEST raid_state_function_test 00:17:18.560 ************************************ 00:17:18.819 10:27:31 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:18.819 10:27:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:18.819 10:27:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.819 10:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.819 ************************************ 00:17:18.819 START TEST raid_state_function_test_sb 00:17:18.819 ************************************ 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:18.819 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3391172 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3391172' 00:17:18.820 Process raid pid: 3391172 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3391172 /var/tmp/spdk-raid.sock 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3391172 ']' 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:18.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.820 10:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.820 [2024-07-26 10:27:31.587699] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:17:18.820 [2024-07-26 10:27:31.587755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:18.820 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:18.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:18.820 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:19.079 [2024-07-26 10:27:31.723128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.079 [2024-07-26 10:27:31.767178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.079 [2024-07-26 10:27:31.828718] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.079 [2024-07-26 10:27:31.828754] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.648 10:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.648 10:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:19.648 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:19.907 [2024-07-26 10:27:32.694705] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.907 [2024-07-26 10:27:32.694747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.907 [2024-07-26 10:27:32.694757] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.907 [2024-07-26 10:27:32.694767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.907 [2024-07-26 10:27:32.694775] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.907 [2024-07-26 10:27:32.694785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.907 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.167 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.167 "name": "Existed_Raid", 00:17:20.167 "uuid": "3c6bf136-7859-4a09-b690-8604f51803b6", 00:17:20.167 "strip_size_kb": 64, 00:17:20.167 "state": "configuring", 00:17:20.167 "raid_level": "concat", 00:17:20.168 "superblock": true, 00:17:20.168 "num_base_bdevs": 3, 00:17:20.168 "num_base_bdevs_discovered": 0, 00:17:20.168 "num_base_bdevs_operational": 3, 00:17:20.168 "base_bdevs_list": [ 00:17:20.168 { 00:17:20.168 "name": "BaseBdev1", 00:17:20.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.168 "is_configured": false, 00:17:20.168 "data_offset": 0, 00:17:20.168 "data_size": 0 00:17:20.168 }, 00:17:20.168 { 00:17:20.168 "name": "BaseBdev2", 00:17:20.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.168 "is_configured": false, 00:17:20.168 "data_offset": 0, 00:17:20.168 "data_size": 0 00:17:20.168 }, 00:17:20.168 { 00:17:20.168 "name": "BaseBdev3", 00:17:20.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.168 "is_configured": false, 00:17:20.168 "data_offset": 0, 00:17:20.168 "data_size": 0 00:17:20.168 } 00:17:20.168 ] 00:17:20.168 }' 00:17:20.168 10:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.168 10:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.735 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.994 [2024-07-26 10:27:33.657104] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.994 [2024-07-26 10:27:33.657143] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x911b70 name Existed_Raid, state configuring 00:17:20.994 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:20.994 [2024-07-26 10:27:33.885729] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.994 [2024-07-26 10:27:33.885757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.994 [2024-07-26 10:27:33.885766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.994 [2024-07-26 10:27:33.885776] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.994 [2024-07-26 10:27:33.885784] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.994 [2024-07-26 10:27:33.885793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.253 10:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.253 [2024-07-26 10:27:34.063654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.253 BaseBdev1 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:21.253 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.512 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.771 [ 00:17:21.771 { 00:17:21.771 "name": "BaseBdev1", 00:17:21.771 "aliases": [ 00:17:21.771 "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39" 00:17:21.771 ], 00:17:21.771 "product_name": "Malloc disk", 00:17:21.771 "block_size": 512, 00:17:21.771 "num_blocks": 65536, 00:17:21.771 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:21.771 "assigned_rate_limits": { 00:17:21.771 "rw_ios_per_sec": 0, 00:17:21.771 "rw_mbytes_per_sec": 0, 00:17:21.771 "r_mbytes_per_sec": 0, 00:17:21.771 "w_mbytes_per_sec": 0 00:17:21.771 }, 00:17:21.771 "claimed": true, 00:17:21.771 "claim_type": "exclusive_write", 00:17:21.771 "zoned": false, 00:17:21.771 "supported_io_types": { 00:17:21.771 "read": true, 00:17:21.771 "write": true, 00:17:21.771 "unmap": true, 00:17:21.771 "flush": true, 00:17:21.771 "reset": true, 00:17:21.771 "nvme_admin": false, 00:17:21.771 "nvme_io": false, 00:17:21.771 "nvme_io_md": false, 00:17:21.771 "write_zeroes": true, 00:17:21.771 "zcopy": true, 00:17:21.771 "get_zone_info": false, 00:17:21.772 "zone_management": false, 00:17:21.772 "zone_append": false, 00:17:21.772 "compare": false, 00:17:21.772 "compare_and_write": false, 00:17:21.772 "abort": true, 00:17:21.772 "seek_hole": false, 00:17:21.772 "seek_data": false, 00:17:21.772 "copy": true, 00:17:21.772 "nvme_iov_md": false 00:17:21.772 }, 00:17:21.772 "memory_domains": [ 00:17:21.772 { 00:17:21.772 "dma_device_id": "system", 00:17:21.772 "dma_device_type": 1 00:17:21.772 }, 00:17:21.772 { 00:17:21.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.772 "dma_device_type": 2 00:17:21.772 } 00:17:21.772 ], 00:17:21.772 "driver_specific": {} 00:17:21.772 } 00:17:21.772 ] 00:17:21.772 10:27:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.772 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.031 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.031 "name": "Existed_Raid", 00:17:22.031 "uuid": "3752f31b-1fe2-4378-9e58-97808d9785b3", 00:17:22.031 "strip_size_kb": 64, 00:17:22.031 "state": "configuring", 00:17:22.031 "raid_level": "concat", 00:17:22.031 "superblock": true, 00:17:22.031 "num_base_bdevs": 3, 00:17:22.031 "num_base_bdevs_discovered": 1, 00:17:22.031 "num_base_bdevs_operational": 3, 00:17:22.031 "base_bdevs_list": [ 00:17:22.031 { 00:17:22.031 "name": "BaseBdev1", 00:17:22.031 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:22.031 "is_configured": true, 00:17:22.031 "data_offset": 2048, 00:17:22.031 "data_size": 63488 00:17:22.031 }, 00:17:22.031 { 00:17:22.031 "name": "BaseBdev2", 00:17:22.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.031 "is_configured": false, 00:17:22.031 "data_offset": 0, 00:17:22.031 "data_size": 0 00:17:22.031 }, 00:17:22.031 { 00:17:22.031 "name": "BaseBdev3", 00:17:22.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.031 "is_configured": false, 00:17:22.031 "data_offset": 0, 00:17:22.031 "data_size": 0 00:17:22.031 } 00:17:22.031 ] 00:17:22.031 }' 00:17:22.031 10:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.031 10:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.599 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.859 [2024-07-26 10:27:35.551560] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.859 [2024-07-26 10:27:35.551597] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9114a0 name Existed_Raid, state configuring 00:17:22.859 10:27:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:22.859 [2024-07-26 10:27:35.728075] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.859 [2024-07-26 10:27:35.729417] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.859 [2024-07-26 10:27:35.729448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.859 [2024-07-26 10:27:35.729457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.859 [2024-07-26 10:27:35.729467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.859 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.118 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.118 "name": "Existed_Raid", 00:17:23.118 "uuid": "1528728d-0074-4838-ba97-caeb4b920a05", 00:17:23.118 "strip_size_kb": 64, 00:17:23.118 "state": "configuring", 00:17:23.118 "raid_level": "concat", 00:17:23.118 "superblock": true, 00:17:23.118 "num_base_bdevs": 3, 00:17:23.118 "num_base_bdevs_discovered": 1, 00:17:23.118 "num_base_bdevs_operational": 3, 00:17:23.118 "base_bdevs_list": [ 00:17:23.118 { 00:17:23.118 "name": "BaseBdev1", 00:17:23.118 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:23.118 "is_configured": true, 00:17:23.118 "data_offset": 2048, 00:17:23.118 "data_size": 63488 00:17:23.118 }, 00:17:23.118 { 00:17:23.118 "name": "BaseBdev2", 00:17:23.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.118 "is_configured": false, 00:17:23.118 "data_offset": 0, 
00:17:23.118 "data_size": 0 00:17:23.118 }, 00:17:23.118 { 00:17:23.118 "name": "BaseBdev3", 00:17:23.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.118 "is_configured": false, 00:17:23.118 "data_offset": 0, 00:17:23.118 "data_size": 0 00:17:23.118 } 00:17:23.118 ] 00:17:23.118 }' 00:17:23.118 10:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.118 10:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.685 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.944 [2024-07-26 10:27:36.782061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.944 BaseBdev2 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:23.944 10:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.202 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.461 [ 00:17:24.461 { 00:17:24.461 "name": "BaseBdev2", 00:17:24.461 "aliases": [ 00:17:24.461 "e4e540b8-4352-4ef6-bce9-fefa4dbc1070" 00:17:24.461 ], 00:17:24.461 "product_name": "Malloc disk", 00:17:24.461 "block_size": 512, 00:17:24.461 "num_blocks": 65536, 00:17:24.461 "uuid": "e4e540b8-4352-4ef6-bce9-fefa4dbc1070", 00:17:24.461 "assigned_rate_limits": { 00:17:24.461 "rw_ios_per_sec": 0, 00:17:24.461 "rw_mbytes_per_sec": 0, 00:17:24.461 "r_mbytes_per_sec": 0, 00:17:24.461 "w_mbytes_per_sec": 0 00:17:24.461 }, 00:17:24.461 "claimed": true, 00:17:24.461 "claim_type": "exclusive_write", 00:17:24.461 "zoned": false, 00:17:24.461 "supported_io_types": { 00:17:24.461 "read": true, 00:17:24.461 "write": true, 00:17:24.461 "unmap": true, 00:17:24.461 "flush": true, 00:17:24.461 "reset": true, 00:17:24.461 "nvme_admin": false, 00:17:24.461 "nvme_io": false, 00:17:24.461 "nvme_io_md": false, 00:17:24.461 "write_zeroes": true, 00:17:24.461 "zcopy": true, 00:17:24.461 "get_zone_info": false, 00:17:24.461 "zone_management": false, 00:17:24.461 "zone_append": false, 00:17:24.461 "compare": false, 00:17:24.461 "compare_and_write": false, 00:17:24.461 "abort": true, 00:17:24.461 "seek_hole": false, 00:17:24.461 "seek_data": false, 00:17:24.461 "copy": true, 00:17:24.461 "nvme_iov_md": false 00:17:24.461 }, 00:17:24.461 "memory_domains": [ 00:17:24.461 { 00:17:24.461 "dma_device_id": "system", 00:17:24.461 "dma_device_type": 1 00:17:24.461 }, 00:17:24.461 { 00:17:24.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.461 "dma_device_type": 
2 00:17:24.461 } 00:17:24.461 ], 00:17:24.461 "driver_specific": {} 00:17:24.461 } 00:17:24.461 ] 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.461 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.721 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.721 "name": "Existed_Raid", 00:17:24.721 "uuid": "1528728d-0074-4838-ba97-caeb4b920a05", 00:17:24.721 "strip_size_kb": 64, 00:17:24.721 "state": "configuring", 00:17:24.721 "raid_level": "concat", 00:17:24.721 "superblock": true, 00:17:24.721 "num_base_bdevs": 3, 00:17:24.721 "num_base_bdevs_discovered": 2, 00:17:24.721 "num_base_bdevs_operational": 3, 00:17:24.721 "base_bdevs_list": [ 00:17:24.721 { 00:17:24.721 "name": "BaseBdev1", 00:17:24.721 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:24.721 "is_configured": true, 00:17:24.721 "data_offset": 2048, 00:17:24.721 "data_size": 63488 00:17:24.721 }, 00:17:24.721 { 00:17:24.721 "name": "BaseBdev2", 00:17:24.721 "uuid": "e4e540b8-4352-4ef6-bce9-fefa4dbc1070", 00:17:24.721 "is_configured": true, 00:17:24.721 "data_offset": 2048, 00:17:24.721 "data_size": 63488 00:17:24.721 }, 00:17:24.721 { 00:17:24.721 "name": "BaseBdev3", 00:17:24.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.721 "is_configured": false, 00:17:24.721 "data_offset": 0, 00:17:24.721 "data_size": 0 00:17:24.721 } 00:17:24.721 ] 00:17:24.721 }' 00:17:24.721 10:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.721 10:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:17:25.307 [2024-07-26 10:27:38.196876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.307 [2024-07-26 10:27:38.197023] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xac42d0 00:17:25.307 [2024-07-26 10:27:38.197035] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:25.307 [2024-07-26 10:27:38.197203] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x911a90 00:17:25.307 [2024-07-26 10:27:38.197316] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xac42d0 00:17:25.307 [2024-07-26 10:27:38.197326] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xac42d0 00:17:25.307 [2024-07-26 10:27:38.197410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.615 BaseBdev3 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.615 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.874 [ 00:17:25.874 { 00:17:25.874 "name": "BaseBdev3", 00:17:25.874 "aliases": [ 00:17:25.874 "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3" 00:17:25.874 ], 00:17:25.874 "product_name": "Malloc disk", 00:17:25.874 "block_size": 512, 00:17:25.874 "num_blocks": 65536, 00:17:25.874 "uuid": "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3", 00:17:25.874 "assigned_rate_limits": { 00:17:25.874 "rw_ios_per_sec": 0, 00:17:25.874 "rw_mbytes_per_sec": 0, 00:17:25.874 "r_mbytes_per_sec": 0, 00:17:25.874 "w_mbytes_per_sec": 0 00:17:25.874 }, 00:17:25.874 "claimed": true, 00:17:25.874 "claim_type": "exclusive_write", 00:17:25.874 "zoned": false, 00:17:25.874 "supported_io_types": { 00:17:25.874 "read": true, 00:17:25.874 "write": true, 00:17:25.874 "unmap": true, 00:17:25.874 "flush": true, 00:17:25.874 "reset": true, 00:17:25.874 "nvme_admin": false, 00:17:25.874 "nvme_io": false, 00:17:25.874 "nvme_io_md": false, 00:17:25.874 "write_zeroes": true, 00:17:25.874 "zcopy": true, 00:17:25.874 "get_zone_info": false, 00:17:25.874 "zone_management": false, 00:17:25.874 "zone_append": false, 00:17:25.874 "compare": false, 00:17:25.874 "compare_and_write": false, 00:17:25.874 "abort": true, 00:17:25.874 "seek_hole": false, 00:17:25.874 "seek_data": false, 00:17:25.874 "copy": true, 00:17:25.874 "nvme_iov_md": false 00:17:25.874 }, 00:17:25.874 "memory_domains": [ 00:17:25.874 { 00:17:25.874 "dma_device_id": "system", 00:17:25.874 "dma_device_type": 1 00:17:25.874 }, 00:17:25.874 { 00:17:25.874 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.874 "dma_device_type": 2 00:17:25.874 } 00:17:25.874 ], 00:17:25.874 "driver_specific": {} 00:17:25.874 } 00:17:25.874 ] 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.874 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.874 "name": "Existed_Raid", 00:17:25.874 "uuid": "1528728d-0074-4838-ba97-caeb4b920a05", 00:17:25.874 "strip_size_kb": 64, 00:17:25.874 "state": "online", 00:17:25.874 "raid_level": "concat", 00:17:25.874 "superblock": true, 00:17:25.874 "num_base_bdevs": 3, 00:17:25.874 "num_base_bdevs_discovered": 3, 00:17:25.874 "num_base_bdevs_operational": 3, 00:17:25.874 "base_bdevs_list": [ 00:17:25.874 { 00:17:25.874 "name": "BaseBdev1", 00:17:25.874 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:25.874 "is_configured": true, 00:17:25.874 "data_offset": 2048, 00:17:25.874 "data_size": 63488 00:17:25.874 }, 00:17:25.874 { 00:17:25.874 "name": "BaseBdev2", 00:17:25.874 "uuid": "e4e540b8-4352-4ef6-bce9-fefa4dbc1070", 00:17:25.874 "is_configured": true, 00:17:25.874 "data_offset": 2048, 00:17:25.874 "data_size": 63488 00:17:25.874 }, 00:17:25.874 { 00:17:25.874 "name": "BaseBdev3", 00:17:25.874 "uuid": "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3", 00:17:25.874 "is_configured": true, 00:17:25.875 "data_offset": 2048, 00:17:25.875 "data_size": 63488 00:17:25.875 } 00:17:25.875 ] 00:17:25.875 }' 00:17:25.875 10:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.875 10:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:26.442 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.701 [2024-07-26 10:27:39.496571] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.701 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.701 "name": "Existed_Raid", 00:17:26.701 "aliases": [ 00:17:26.701 "1528728d-0074-4838-ba97-caeb4b920a05" 00:17:26.701 ], 00:17:26.701 "product_name": "Raid Volume", 00:17:26.701 "block_size": 512, 00:17:26.701 "num_blocks": 190464, 00:17:26.701 "uuid": "1528728d-0074-4838-ba97-caeb4b920a05", 00:17:26.701 "assigned_rate_limits": { 00:17:26.701 "rw_ios_per_sec": 0, 00:17:26.701 "rw_mbytes_per_sec": 0, 00:17:26.701 "r_mbytes_per_sec": 0, 00:17:26.701 "w_mbytes_per_sec": 0 00:17:26.701 }, 00:17:26.701 "claimed": false, 00:17:26.701 "zoned": false, 00:17:26.701 "supported_io_types": { 00:17:26.701 "read": true, 00:17:26.701 "write": true, 00:17:26.701 "unmap": true, 00:17:26.701 "flush": true, 00:17:26.701 "reset": true, 00:17:26.701 "nvme_admin": false, 00:17:26.701 "nvme_io": false, 00:17:26.701 "nvme_io_md": false, 00:17:26.701 "write_zeroes": true, 00:17:26.701 "zcopy": false, 00:17:26.701 "get_zone_info": false, 00:17:26.701 "zone_management": false, 00:17:26.701 "zone_append": false, 00:17:26.701 "compare": false, 00:17:26.701 "compare_and_write": false, 00:17:26.701 "abort": false, 00:17:26.701 "seek_hole": false, 00:17:26.701 "seek_data": false, 00:17:26.701 "copy": false, 00:17:26.701 "nvme_iov_md": false 00:17:26.701 }, 00:17:26.701 "memory_domains": [ 00:17:26.701 { 00:17:26.701 "dma_device_id": "system", 00:17:26.701 "dma_device_type": 1 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.701 "dma_device_type": 2 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "dma_device_id": "system", 00:17:26.701 "dma_device_type": 1 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.701 "dma_device_type": 2 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "dma_device_id": "system", 00:17:26.701 "dma_device_type": 1 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.701 "dma_device_type": 2 00:17:26.701 } 00:17:26.701 ], 00:17:26.701 "driver_specific": { 00:17:26.701 "raid": { 00:17:26.701 "uuid": "1528728d-0074-4838-ba97-caeb4b920a05", 00:17:26.701 "strip_size_kb": 64, 00:17:26.701 "state": "online", 00:17:26.701 "raid_level": "concat", 00:17:26.701 "superblock": true, 00:17:26.701 "num_base_bdevs": 3, 00:17:26.701 "num_base_bdevs_discovered": 3, 00:17:26.701 "num_base_bdevs_operational": 3, 00:17:26.701 "base_bdevs_list": [ 00:17:26.701 { 
00:17:26.701 "name": "BaseBdev1", 00:17:26.701 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:26.701 "is_configured": true, 00:17:26.701 "data_offset": 2048, 00:17:26.701 "data_size": 63488 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "name": "BaseBdev2", 00:17:26.701 "uuid": "e4e540b8-4352-4ef6-bce9-fefa4dbc1070", 00:17:26.701 "is_configured": true, 00:17:26.701 "data_offset": 2048, 00:17:26.701 "data_size": 63488 00:17:26.701 }, 00:17:26.701 { 00:17:26.701 "name": "BaseBdev3", 00:17:26.701 "uuid": "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3", 00:17:26.701 "is_configured": true, 00:17:26.701 "data_offset": 2048, 00:17:26.701 "data_size": 63488 00:17:26.701 } 00:17:26.701 ] 00:17:26.701 } 00:17:26.701 } 00:17:26.701 }' 00:17:26.701 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.701 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:26.701 BaseBdev2 00:17:26.701 BaseBdev3' 00:17:26.701 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.701 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:26.701 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.960 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.960 "name": "BaseBdev1", 00:17:26.960 "aliases": [ 00:17:26.960 "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39" 00:17:26.960 ], 00:17:26.960 "product_name": "Malloc disk", 00:17:26.960 "block_size": 512, 00:17:26.960 "num_blocks": 65536, 00:17:26.960 "uuid": "8f37e1a5-35b2-49d9-a11d-75c6e9ee1c39", 00:17:26.960 "assigned_rate_limits": { 00:17:26.960 "rw_ios_per_sec": 0, 00:17:26.960 "rw_mbytes_per_sec": 0, 00:17:26.960 "r_mbytes_per_sec": 0, 00:17:26.960 "w_mbytes_per_sec": 0 00:17:26.960 }, 00:17:26.960 "claimed": true, 00:17:26.960 "claim_type": "exclusive_write", 00:17:26.960 "zoned": false, 00:17:26.960 "supported_io_types": { 00:17:26.960 "read": true, 00:17:26.960 "write": true, 00:17:26.960 "unmap": true, 00:17:26.960 "flush": true, 00:17:26.960 "reset": true, 00:17:26.960 "nvme_admin": false, 00:17:26.960 "nvme_io": false, 00:17:26.960 "nvme_io_md": false, 00:17:26.960 "write_zeroes": true, 00:17:26.960 "zcopy": true, 00:17:26.960 "get_zone_info": false, 00:17:26.960 "zone_management": false, 00:17:26.960 "zone_append": false, 00:17:26.960 "compare": false, 00:17:26.960 "compare_and_write": false, 00:17:26.960 "abort": true, 00:17:26.960 "seek_hole": false, 00:17:26.960 "seek_data": false, 00:17:26.960 "copy": true, 00:17:26.960 "nvme_iov_md": false 00:17:26.960 }, 00:17:26.960 "memory_domains": [ 00:17:26.960 { 00:17:26.960 "dma_device_id": "system", 00:17:26.960 "dma_device_type": 1 00:17:26.960 }, 00:17:26.960 { 00:17:26.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.960 "dma_device_type": 2 00:17:26.960 } 00:17:26.960 ], 00:17:26.960 "driver_specific": {} 00:17:26.960 }' 00:17:26.960 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.960 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.960 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:26.960 
10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.219 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.219 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.219 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.219 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.219 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.219 10:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.219 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.219 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.219 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.219 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:27.219 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.477 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.477 "name": "BaseBdev2", 00:17:27.477 "aliases": [ 00:17:27.477 "e4e540b8-4352-4ef6-bce9-fefa4dbc1070" 00:17:27.477 ], 00:17:27.477 "product_name": "Malloc disk", 00:17:27.477 "block_size": 512, 00:17:27.477 "num_blocks": 65536, 00:17:27.477 "uuid": "e4e540b8-4352-4ef6-bce9-fefa4dbc1070", 00:17:27.477 "assigned_rate_limits": { 00:17:27.477 "rw_ios_per_sec": 0, 00:17:27.477 "rw_mbytes_per_sec": 0, 00:17:27.477 "r_mbytes_per_sec": 0, 00:17:27.477 "w_mbytes_per_sec": 0 00:17:27.477 }, 00:17:27.477 "claimed": true, 00:17:27.477 "claim_type": "exclusive_write", 00:17:27.477 "zoned": false, 00:17:27.477 "supported_io_types": { 00:17:27.477 "read": true, 00:17:27.477 "write": true, 00:17:27.477 "unmap": true, 00:17:27.477 "flush": true, 00:17:27.477 "reset": true, 00:17:27.477 "nvme_admin": false, 00:17:27.477 "nvme_io": false, 00:17:27.477 "nvme_io_md": false, 00:17:27.477 "write_zeroes": true, 00:17:27.477 "zcopy": true, 00:17:27.477 "get_zone_info": false, 00:17:27.477 "zone_management": false, 00:17:27.477 "zone_append": false, 00:17:27.477 "compare": false, 00:17:27.477 "compare_and_write": false, 00:17:27.477 "abort": true, 00:17:27.477 "seek_hole": false, 00:17:27.477 "seek_data": false, 00:17:27.477 "copy": true, 00:17:27.477 "nvme_iov_md": false 00:17:27.477 }, 00:17:27.477 "memory_domains": [ 00:17:27.477 { 00:17:27.477 "dma_device_id": "system", 00:17:27.477 "dma_device_type": 1 00:17:27.477 }, 00:17:27.477 { 00:17:27.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.477 "dma_device_type": 2 00:17:27.477 } 00:17:27.477 ], 00:17:27.477 "driver_specific": {} 00:17:27.477 }' 00:17:27.477 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.477 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.736 10:27:40 
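The md_size, md_interleave and dif_type probes repeated here (bdev_raid.sh@206-@208) assert that neither the Malloc base bdevs nor the concat volume carry any metadata or DIF configuration, i.e. the optional fields all come back null. A compact sketch of the same assertion for a single bdev, using only the jq filters already shown in this run:

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
check_no_md() {                       # fail if the named bdev reports md/DIF settings
  local info field
  info=$($rpc bdev_get_bdevs -b "$1" | jq '.[]')
  for field in .md_size .md_interleave .dif_type; do
    [[ $(jq "$field" <<< "$info") == null ]] || { echo "$1: $field is set"; return 1; }
  done
}
check_no_md BaseBdev2                 # same fields the test compares above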
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.736 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:27.995 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.995 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.995 "name": "BaseBdev3", 00:17:27.995 "aliases": [ 00:17:27.995 "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3" 00:17:27.995 ], 00:17:27.995 "product_name": "Malloc disk", 00:17:27.995 "block_size": 512, 00:17:27.995 "num_blocks": 65536, 00:17:27.995 "uuid": "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3", 00:17:27.995 "assigned_rate_limits": { 00:17:27.995 "rw_ios_per_sec": 0, 00:17:27.995 "rw_mbytes_per_sec": 0, 00:17:27.995 "r_mbytes_per_sec": 0, 00:17:27.995 "w_mbytes_per_sec": 0 00:17:27.995 }, 00:17:27.995 "claimed": true, 00:17:27.995 "claim_type": "exclusive_write", 00:17:27.995 "zoned": false, 00:17:27.995 "supported_io_types": { 00:17:27.995 "read": true, 00:17:27.995 "write": true, 00:17:27.995 "unmap": true, 00:17:27.995 "flush": true, 00:17:27.995 "reset": true, 00:17:27.995 "nvme_admin": false, 00:17:27.995 "nvme_io": false, 00:17:27.995 "nvme_io_md": false, 00:17:27.995 "write_zeroes": true, 00:17:27.995 "zcopy": true, 00:17:27.995 "get_zone_info": false, 00:17:27.995 "zone_management": false, 00:17:27.995 "zone_append": false, 00:17:27.995 "compare": false, 00:17:27.995 "compare_and_write": false, 00:17:27.995 "abort": true, 00:17:27.995 "seek_hole": false, 00:17:27.995 "seek_data": false, 00:17:27.995 "copy": true, 00:17:27.995 "nvme_iov_md": false 00:17:27.995 }, 00:17:27.995 "memory_domains": [ 00:17:27.995 { 00:17:27.995 "dma_device_id": "system", 00:17:27.995 "dma_device_type": 1 00:17:27.995 }, 00:17:27.995 { 00:17:27.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.995 "dma_device_type": 2 00:17:27.995 } 00:17:27.995 ], 00:17:27.995 "driver_specific": {} 00:17:27.995 }' 00:17:27.995 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.253 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.253 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.253 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.253 10:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.253 10:27:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.253 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.253 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.253 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.253 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.512 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.512 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.512 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:28.512 [2024-07-26 10:27:41.345421] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.512 [2024-07-26 10:27:41.345447] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.512 [2024-07-26 10:27:41.345486] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.512 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:28.512 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:28.512 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.513 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.771 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.771 "name": "Existed_Raid", 00:17:28.771 "uuid": "1528728d-0074-4838-ba97-caeb4b920a05", 
00:17:28.771 "strip_size_kb": 64, 00:17:28.772 "state": "offline", 00:17:28.772 "raid_level": "concat", 00:17:28.772 "superblock": true, 00:17:28.772 "num_base_bdevs": 3, 00:17:28.772 "num_base_bdevs_discovered": 2, 00:17:28.772 "num_base_bdevs_operational": 2, 00:17:28.772 "base_bdevs_list": [ 00:17:28.772 { 00:17:28.772 "name": null, 00:17:28.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.772 "is_configured": false, 00:17:28.772 "data_offset": 2048, 00:17:28.772 "data_size": 63488 00:17:28.772 }, 00:17:28.772 { 00:17:28.772 "name": "BaseBdev2", 00:17:28.772 "uuid": "e4e540b8-4352-4ef6-bce9-fefa4dbc1070", 00:17:28.772 "is_configured": true, 00:17:28.772 "data_offset": 2048, 00:17:28.772 "data_size": 63488 00:17:28.772 }, 00:17:28.772 { 00:17:28.772 "name": "BaseBdev3", 00:17:28.772 "uuid": "f48f1a5d-e292-4020-ba5a-2a40bb8e44e3", 00:17:28.772 "is_configured": true, 00:17:28.772 "data_offset": 2048, 00:17:28.772 "data_size": 63488 00:17:28.772 } 00:17:28.772 ] 00:17:28.772 }' 00:17:28.772 10:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.772 10:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.339 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:29.339 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:29.339 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.339 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:29.598 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:29.598 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.598 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:29.598 [2024-07-26 10:27:42.477364] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:29.856 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:29.856 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:29.856 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.856 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:29.857 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:29.857 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.857 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:30.115 [2024-07-26 10:27:42.944870] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:30.115 [2024-07-26 10:27:42.944914] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xac42d0 name Existed_Raid, state offline 00:17:30.115 
10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:30.115 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.115 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.115 10:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.374 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:30.374 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:30.374 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:30.374 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:30.374 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:30.374 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:30.631 BaseBdev2 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:30.631 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.889 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.148 [ 00:17:31.148 { 00:17:31.148 "name": "BaseBdev2", 00:17:31.148 "aliases": [ 00:17:31.148 "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60" 00:17:31.148 ], 00:17:31.148 "product_name": "Malloc disk", 00:17:31.148 "block_size": 512, 00:17:31.148 "num_blocks": 65536, 00:17:31.148 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:31.148 "assigned_rate_limits": { 00:17:31.148 "rw_ios_per_sec": 0, 00:17:31.148 "rw_mbytes_per_sec": 0, 00:17:31.148 "r_mbytes_per_sec": 0, 00:17:31.148 "w_mbytes_per_sec": 0 00:17:31.148 }, 00:17:31.148 "claimed": false, 00:17:31.148 "zoned": false, 00:17:31.148 "supported_io_types": { 00:17:31.148 "read": true, 00:17:31.148 "write": true, 00:17:31.148 "unmap": true, 00:17:31.148 "flush": true, 00:17:31.148 "reset": true, 00:17:31.148 "nvme_admin": false, 00:17:31.148 "nvme_io": false, 00:17:31.148 "nvme_io_md": false, 00:17:31.148 "write_zeroes": true, 00:17:31.148 "zcopy": true, 00:17:31.148 "get_zone_info": false, 00:17:31.148 "zone_management": false, 00:17:31.148 "zone_append": false, 00:17:31.148 "compare": false, 00:17:31.148 "compare_and_write": false, 00:17:31.148 "abort": true, 
00:17:31.148 "seek_hole": false, 00:17:31.148 "seek_data": false, 00:17:31.148 "copy": true, 00:17:31.148 "nvme_iov_md": false 00:17:31.148 }, 00:17:31.148 "memory_domains": [ 00:17:31.148 { 00:17:31.148 "dma_device_id": "system", 00:17:31.148 "dma_device_type": 1 00:17:31.148 }, 00:17:31.148 { 00:17:31.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.148 "dma_device_type": 2 00:17:31.148 } 00:17:31.148 ], 00:17:31.148 "driver_specific": {} 00:17:31.148 } 00:17:31.148 ] 00:17:31.148 10:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:31.148 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:31.148 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:31.148 10:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.407 BaseBdev3 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:31.407 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.664 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:31.923 [ 00:17:31.923 { 00:17:31.923 "name": "BaseBdev3", 00:17:31.923 "aliases": [ 00:17:31.923 "ab536998-3e2a-4115-a3f2-a66187e79e57" 00:17:31.923 ], 00:17:31.923 "product_name": "Malloc disk", 00:17:31.923 "block_size": 512, 00:17:31.923 "num_blocks": 65536, 00:17:31.923 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:31.923 "assigned_rate_limits": { 00:17:31.923 "rw_ios_per_sec": 0, 00:17:31.923 "rw_mbytes_per_sec": 0, 00:17:31.923 "r_mbytes_per_sec": 0, 00:17:31.923 "w_mbytes_per_sec": 0 00:17:31.923 }, 00:17:31.923 "claimed": false, 00:17:31.923 "zoned": false, 00:17:31.923 "supported_io_types": { 00:17:31.923 "read": true, 00:17:31.923 "write": true, 00:17:31.923 "unmap": true, 00:17:31.923 "flush": true, 00:17:31.923 "reset": true, 00:17:31.923 "nvme_admin": false, 00:17:31.923 "nvme_io": false, 00:17:31.923 "nvme_io_md": false, 00:17:31.923 "write_zeroes": true, 00:17:31.923 "zcopy": true, 00:17:31.923 "get_zone_info": false, 00:17:31.923 "zone_management": false, 00:17:31.923 "zone_append": false, 00:17:31.923 "compare": false, 00:17:31.923 "compare_and_write": false, 00:17:31.923 "abort": true, 00:17:31.923 "seek_hole": false, 00:17:31.923 "seek_data": false, 00:17:31.923 "copy": true, 00:17:31.923 "nvme_iov_md": false 00:17:31.923 }, 00:17:31.923 "memory_domains": [ 00:17:31.923 { 00:17:31.923 "dma_device_id": "system", 00:17:31.923 
"dma_device_type": 1 00:17:31.923 }, 00:17:31.923 { 00:17:31.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.923 "dma_device_type": 2 00:17:31.923 } 00:17:31.923 ], 00:17:31.923 "driver_specific": {} 00:17:31.923 } 00:17:31.923 ] 00:17:31.923 10:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:31.924 [2024-07-26 10:27:44.792336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.924 [2024-07-26 10:27:44.792371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.924 [2024-07-26 10:27:44.792390] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.924 [2024-07-26 10:27:44.793596] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.924 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.183 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.183 10:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.184 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.184 "name": "Existed_Raid", 00:17:32.184 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:32.184 "strip_size_kb": 64, 00:17:32.184 "state": "configuring", 00:17:32.184 "raid_level": "concat", 00:17:32.184 "superblock": true, 00:17:32.184 "num_base_bdevs": 3, 00:17:32.184 "num_base_bdevs_discovered": 2, 00:17:32.184 "num_base_bdevs_operational": 3, 00:17:32.184 "base_bdevs_list": [ 00:17:32.184 { 00:17:32.184 "name": "BaseBdev1", 00:17:32.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.184 "is_configured": false, 00:17:32.184 "data_offset": 0, 
00:17:32.184 "data_size": 0 00:17:32.184 }, 00:17:32.184 { 00:17:32.184 "name": "BaseBdev2", 00:17:32.184 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:32.184 "is_configured": true, 00:17:32.184 "data_offset": 2048, 00:17:32.184 "data_size": 63488 00:17:32.184 }, 00:17:32.184 { 00:17:32.184 "name": "BaseBdev3", 00:17:32.184 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:32.184 "is_configured": true, 00:17:32.184 "data_offset": 2048, 00:17:32.184 "data_size": 63488 00:17:32.184 } 00:17:32.184 ] 00:17:32.184 }' 00:17:32.184 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.184 10:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.751 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:33.010 [2024-07-26 10:27:45.855112] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.010 10:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.269 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.269 "name": "Existed_Raid", 00:17:33.269 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:33.269 "strip_size_kb": 64, 00:17:33.269 "state": "configuring", 00:17:33.269 "raid_level": "concat", 00:17:33.269 "superblock": true, 00:17:33.269 "num_base_bdevs": 3, 00:17:33.269 "num_base_bdevs_discovered": 1, 00:17:33.269 "num_base_bdevs_operational": 3, 00:17:33.269 "base_bdevs_list": [ 00:17:33.269 { 00:17:33.269 "name": "BaseBdev1", 00:17:33.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.269 "is_configured": false, 00:17:33.269 "data_offset": 0, 00:17:33.269 "data_size": 0 00:17:33.269 }, 00:17:33.269 { 00:17:33.269 "name": null, 00:17:33.269 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:33.269 "is_configured": false, 00:17:33.269 "data_offset": 2048, 00:17:33.269 "data_size": 63488 00:17:33.269 }, 
00:17:33.269 { 00:17:33.269 "name": "BaseBdev3", 00:17:33.269 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:33.269 "is_configured": true, 00:17:33.269 "data_offset": 2048, 00:17:33.269 "data_size": 63488 00:17:33.269 } 00:17:33.269 ] 00:17:33.269 }' 00:17:33.269 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.269 10:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.835 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:34.094 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:34.094 10:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:34.353 [2024-07-26 10:27:47.125695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.353 BaseBdev1 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:34.353 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.611 [ 00:17:34.611 { 00:17:34.611 "name": "BaseBdev1", 00:17:34.611 "aliases": [ 00:17:34.611 "64c2a896-10d1-46be-8e29-ec6a40464093" 00:17:34.611 ], 00:17:34.611 "product_name": "Malloc disk", 00:17:34.611 "block_size": 512, 00:17:34.611 "num_blocks": 65536, 00:17:34.611 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:34.611 "assigned_rate_limits": { 00:17:34.611 "rw_ios_per_sec": 0, 00:17:34.611 "rw_mbytes_per_sec": 0, 00:17:34.611 "r_mbytes_per_sec": 0, 00:17:34.611 "w_mbytes_per_sec": 0 00:17:34.611 }, 00:17:34.611 "claimed": true, 00:17:34.611 "claim_type": "exclusive_write", 00:17:34.611 "zoned": false, 00:17:34.611 "supported_io_types": { 00:17:34.611 "read": true, 00:17:34.611 "write": true, 00:17:34.611 "unmap": true, 00:17:34.611 "flush": true, 00:17:34.611 "reset": true, 00:17:34.611 "nvme_admin": false, 00:17:34.611 "nvme_io": false, 00:17:34.611 "nvme_io_md": false, 00:17:34.611 "write_zeroes": true, 00:17:34.611 "zcopy": true, 00:17:34.611 "get_zone_info": false, 00:17:34.611 "zone_management": false, 00:17:34.611 "zone_append": false, 00:17:34.611 "compare": false, 00:17:34.611 "compare_and_write": 
false, 00:17:34.611 "abort": true, 00:17:34.611 "seek_hole": false, 00:17:34.611 "seek_data": false, 00:17:34.611 "copy": true, 00:17:34.611 "nvme_iov_md": false 00:17:34.611 }, 00:17:34.611 "memory_domains": [ 00:17:34.611 { 00:17:34.611 "dma_device_id": "system", 00:17:34.611 "dma_device_type": 1 00:17:34.611 }, 00:17:34.611 { 00:17:34.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.611 "dma_device_type": 2 00:17:34.611 } 00:17:34.611 ], 00:17:34.611 "driver_specific": {} 00:17:34.611 } 00:17:34.611 ] 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.611 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.870 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.870 "name": "Existed_Raid", 00:17:34.870 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:34.870 "strip_size_kb": 64, 00:17:34.870 "state": "configuring", 00:17:34.870 "raid_level": "concat", 00:17:34.870 "superblock": true, 00:17:34.870 "num_base_bdevs": 3, 00:17:34.870 "num_base_bdevs_discovered": 2, 00:17:34.870 "num_base_bdevs_operational": 3, 00:17:34.870 "base_bdevs_list": [ 00:17:34.870 { 00:17:34.870 "name": "BaseBdev1", 00:17:34.870 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:34.870 "is_configured": true, 00:17:34.870 "data_offset": 2048, 00:17:34.870 "data_size": 63488 00:17:34.870 }, 00:17:34.870 { 00:17:34.870 "name": null, 00:17:34.870 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:34.870 "is_configured": false, 00:17:34.870 "data_offset": 2048, 00:17:34.870 "data_size": 63488 00:17:34.870 }, 00:17:34.870 { 00:17:34.870 "name": "BaseBdev3", 00:17:34.870 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:34.870 "is_configured": true, 00:17:34.870 "data_offset": 2048, 00:17:34.870 "data_size": 63488 00:17:34.870 } 00:17:34.870 ] 00:17:34.870 }' 00:17:34.870 10:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.870 10:27:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.437 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.437 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:35.696 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:35.696 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:35.955 [2024-07-26 10:27:48.741992] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.955 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.215 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.215 "name": "Existed_Raid", 00:17:36.215 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:36.215 "strip_size_kb": 64, 00:17:36.215 "state": "configuring", 00:17:36.215 "raid_level": "concat", 00:17:36.215 "superblock": true, 00:17:36.215 "num_base_bdevs": 3, 00:17:36.215 "num_base_bdevs_discovered": 1, 00:17:36.215 "num_base_bdevs_operational": 3, 00:17:36.215 "base_bdevs_list": [ 00:17:36.215 { 00:17:36.215 "name": "BaseBdev1", 00:17:36.215 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:36.215 "is_configured": true, 00:17:36.215 "data_offset": 2048, 00:17:36.215 "data_size": 63488 00:17:36.215 }, 00:17:36.215 { 00:17:36.215 "name": null, 00:17:36.215 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:36.215 "is_configured": false, 00:17:36.215 "data_offset": 2048, 00:17:36.215 "data_size": 63488 00:17:36.215 }, 00:17:36.215 { 00:17:36.215 "name": null, 00:17:36.215 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:36.215 "is_configured": false, 00:17:36.215 "data_offset": 2048, 00:17:36.215 "data_size": 63488 00:17:36.215 } 00:17:36.215 ] 00:17:36.215 }' 
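The remainder of the run exercises a superblock raid that stays in the configuring state: Existed_Raid was recreated with -s (superblock) and -r concat while BaseBdev1 did not yet exist, and the test now removes and re-adds members while checking each slot's is_configured flag. Reduced to the RPC calls that appear in this run, and assuming no raid of that name exists yet when bdev_raid_create is issued, the cycle looks like this:

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Create a concat raid with a superblock while one member is still missing;
# it is registered but remains in the "configuring" state.
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Detach and re-attach a member; slot 2 flips is_configured false -> true.
$rpc bdev_raid_remove_base_bdev BaseBdev3
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # false
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # true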
00:17:36.215 10:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.215 10:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.782 10:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.782 10:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.041 10:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:37.041 10:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:37.300 [2024-07-26 10:27:49.989339] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.300 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.559 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.559 "name": "Existed_Raid", 00:17:37.559 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:37.559 "strip_size_kb": 64, 00:17:37.559 "state": "configuring", 00:17:37.559 "raid_level": "concat", 00:17:37.559 "superblock": true, 00:17:37.559 "num_base_bdevs": 3, 00:17:37.559 "num_base_bdevs_discovered": 2, 00:17:37.559 "num_base_bdevs_operational": 3, 00:17:37.559 "base_bdevs_list": [ 00:17:37.559 { 00:17:37.559 "name": "BaseBdev1", 00:17:37.559 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:37.559 "is_configured": true, 00:17:37.559 "data_offset": 2048, 00:17:37.559 "data_size": 63488 00:17:37.559 }, 00:17:37.559 { 00:17:37.559 "name": null, 00:17:37.559 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:37.559 "is_configured": false, 00:17:37.559 "data_offset": 2048, 00:17:37.559 "data_size": 63488 00:17:37.559 }, 00:17:37.559 { 00:17:37.559 "name": "BaseBdev3", 
00:17:37.559 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:37.559 "is_configured": true, 00:17:37.559 "data_offset": 2048, 00:17:37.559 "data_size": 63488 00:17:37.559 } 00:17:37.559 ] 00:17:37.559 }' 00:17:37.559 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.559 10:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.127 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:38.127 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.127 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:38.127 10:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:38.419 [2024-07-26 10:27:51.196537] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.419 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.679 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.679 "name": "Existed_Raid", 00:17:38.679 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:38.679 "strip_size_kb": 64, 00:17:38.679 "state": "configuring", 00:17:38.679 "raid_level": "concat", 00:17:38.679 "superblock": true, 00:17:38.679 "num_base_bdevs": 3, 00:17:38.679 "num_base_bdevs_discovered": 1, 00:17:38.679 "num_base_bdevs_operational": 3, 00:17:38.679 "base_bdevs_list": [ 00:17:38.679 { 00:17:38.679 "name": null, 00:17:38.679 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:38.679 "is_configured": false, 00:17:38.679 "data_offset": 2048, 00:17:38.679 "data_size": 63488 00:17:38.679 }, 00:17:38.679 { 00:17:38.679 "name": null, 00:17:38.679 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 
00:17:38.679 "is_configured": false, 00:17:38.679 "data_offset": 2048, 00:17:38.679 "data_size": 63488 00:17:38.679 }, 00:17:38.679 { 00:17:38.679 "name": "BaseBdev3", 00:17:38.679 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:38.679 "is_configured": true, 00:17:38.679 "data_offset": 2048, 00:17:38.679 "data_size": 63488 00:17:38.679 } 00:17:38.679 ] 00:17:38.679 }' 00:17:38.679 10:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.679 10:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.245 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:39.245 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.504 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:39.504 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:39.763 [2024-07-26 10:27:52.470020] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.763 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.022 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.022 "name": "Existed_Raid", 00:17:40.022 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:40.022 "strip_size_kb": 64, 00:17:40.022 "state": "configuring", 00:17:40.022 "raid_level": "concat", 00:17:40.022 "superblock": true, 00:17:40.022 "num_base_bdevs": 3, 00:17:40.022 "num_base_bdevs_discovered": 2, 00:17:40.022 "num_base_bdevs_operational": 3, 00:17:40.022 "base_bdevs_list": [ 00:17:40.022 { 00:17:40.022 "name": null, 00:17:40.022 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:40.022 
"is_configured": false, 00:17:40.022 "data_offset": 2048, 00:17:40.022 "data_size": 63488 00:17:40.022 }, 00:17:40.022 { 00:17:40.022 "name": "BaseBdev2", 00:17:40.022 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:40.022 "is_configured": true, 00:17:40.022 "data_offset": 2048, 00:17:40.022 "data_size": 63488 00:17:40.022 }, 00:17:40.022 { 00:17:40.022 "name": "BaseBdev3", 00:17:40.022 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:40.022 "is_configured": true, 00:17:40.022 "data_offset": 2048, 00:17:40.022 "data_size": 63488 00:17:40.022 } 00:17:40.022 ] 00:17:40.022 }' 00:17:40.022 10:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.022 10:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.589 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.589 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:40.851 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:40.851 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:40.851 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.851 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 64c2a896-10d1-46be-8e29-ec6a40464093 00:17:41.109 [2024-07-26 10:27:53.953316] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:41.109 [2024-07-26 10:27:53.953456] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xac58f0 00:17:41.109 [2024-07-26 10:27:53.953469] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:41.109 [2024-07-26 10:27:53.953624] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x90c6e0 00:17:41.109 [2024-07-26 10:27:53.953728] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xac58f0 00:17:41.109 [2024-07-26 10:27:53.953738] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xac58f0 00:17:41.109 [2024-07-26 10:27:53.953820] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.109 NewBaseBdev 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:41.109 10:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.367 10:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:41.626 [ 00:17:41.626 { 00:17:41.626 "name": "NewBaseBdev", 00:17:41.626 "aliases": [ 00:17:41.626 "64c2a896-10d1-46be-8e29-ec6a40464093" 00:17:41.626 ], 00:17:41.626 "product_name": "Malloc disk", 00:17:41.626 "block_size": 512, 00:17:41.626 "num_blocks": 65536, 00:17:41.626 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:41.626 "assigned_rate_limits": { 00:17:41.626 "rw_ios_per_sec": 0, 00:17:41.626 "rw_mbytes_per_sec": 0, 00:17:41.626 "r_mbytes_per_sec": 0, 00:17:41.626 "w_mbytes_per_sec": 0 00:17:41.626 }, 00:17:41.626 "claimed": true, 00:17:41.626 "claim_type": "exclusive_write", 00:17:41.626 "zoned": false, 00:17:41.626 "supported_io_types": { 00:17:41.626 "read": true, 00:17:41.626 "write": true, 00:17:41.626 "unmap": true, 00:17:41.626 "flush": true, 00:17:41.626 "reset": true, 00:17:41.626 "nvme_admin": false, 00:17:41.626 "nvme_io": false, 00:17:41.626 "nvme_io_md": false, 00:17:41.626 "write_zeroes": true, 00:17:41.626 "zcopy": true, 00:17:41.626 "get_zone_info": false, 00:17:41.626 "zone_management": false, 00:17:41.626 "zone_append": false, 00:17:41.626 "compare": false, 00:17:41.626 "compare_and_write": false, 00:17:41.626 "abort": true, 00:17:41.626 "seek_hole": false, 00:17:41.626 "seek_data": false, 00:17:41.626 "copy": true, 00:17:41.626 "nvme_iov_md": false 00:17:41.626 }, 00:17:41.626 "memory_domains": [ 00:17:41.626 { 00:17:41.626 "dma_device_id": "system", 00:17:41.626 "dma_device_type": 1 00:17:41.626 }, 00:17:41.626 { 00:17:41.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.626 "dma_device_type": 2 00:17:41.626 } 00:17:41.626 ], 00:17:41.626 "driver_specific": {} 00:17:41.626 } 00:17:41.626 ] 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.627 10:27:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.885 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.885 "name": "Existed_Raid", 00:17:41.885 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:41.885 "strip_size_kb": 64, 00:17:41.885 "state": "online", 00:17:41.885 "raid_level": "concat", 00:17:41.885 "superblock": true, 00:17:41.885 "num_base_bdevs": 3, 00:17:41.885 "num_base_bdevs_discovered": 3, 00:17:41.885 "num_base_bdevs_operational": 3, 00:17:41.885 "base_bdevs_list": [ 00:17:41.885 { 00:17:41.885 "name": "NewBaseBdev", 00:17:41.885 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:41.885 "is_configured": true, 00:17:41.885 "data_offset": 2048, 00:17:41.885 "data_size": 63488 00:17:41.885 }, 00:17:41.885 { 00:17:41.885 "name": "BaseBdev2", 00:17:41.885 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:41.885 "is_configured": true, 00:17:41.885 "data_offset": 2048, 00:17:41.885 "data_size": 63488 00:17:41.885 }, 00:17:41.885 { 00:17:41.885 "name": "BaseBdev3", 00:17:41.885 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:41.885 "is_configured": true, 00:17:41.885 "data_offset": 2048, 00:17:41.885 "data_size": 63488 00:17:41.885 } 00:17:41.885 ] 00:17:41.885 }' 00:17:41.885 10:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.885 10:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:42.452 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:42.711 [2024-07-26 10:27:55.429615] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.711 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:42.711 "name": "Existed_Raid", 00:17:42.711 "aliases": [ 00:17:42.711 "6f464bdd-f358-417e-a185-4b72ff60245a" 00:17:42.711 ], 00:17:42.711 "product_name": "Raid Volume", 00:17:42.711 "block_size": 512, 00:17:42.711 "num_blocks": 190464, 00:17:42.711 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:42.711 "assigned_rate_limits": { 00:17:42.711 "rw_ios_per_sec": 0, 00:17:42.711 "rw_mbytes_per_sec": 0, 00:17:42.711 "r_mbytes_per_sec": 0, 00:17:42.711 "w_mbytes_per_sec": 0 00:17:42.711 }, 00:17:42.711 "claimed": false, 00:17:42.711 "zoned": false, 00:17:42.711 "supported_io_types": { 00:17:42.711 "read": true, 00:17:42.711 "write": true, 00:17:42.711 "unmap": true, 00:17:42.711 "flush": true, 00:17:42.711 "reset": true, 00:17:42.711 "nvme_admin": false, 00:17:42.711 "nvme_io": false, 00:17:42.711 
"nvme_io_md": false, 00:17:42.711 "write_zeroes": true, 00:17:42.711 "zcopy": false, 00:17:42.711 "get_zone_info": false, 00:17:42.711 "zone_management": false, 00:17:42.711 "zone_append": false, 00:17:42.711 "compare": false, 00:17:42.711 "compare_and_write": false, 00:17:42.711 "abort": false, 00:17:42.711 "seek_hole": false, 00:17:42.711 "seek_data": false, 00:17:42.711 "copy": false, 00:17:42.711 "nvme_iov_md": false 00:17:42.711 }, 00:17:42.711 "memory_domains": [ 00:17:42.711 { 00:17:42.711 "dma_device_id": "system", 00:17:42.711 "dma_device_type": 1 00:17:42.711 }, 00:17:42.711 { 00:17:42.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.711 "dma_device_type": 2 00:17:42.711 }, 00:17:42.711 { 00:17:42.711 "dma_device_id": "system", 00:17:42.711 "dma_device_type": 1 00:17:42.711 }, 00:17:42.711 { 00:17:42.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.711 "dma_device_type": 2 00:17:42.711 }, 00:17:42.711 { 00:17:42.711 "dma_device_id": "system", 00:17:42.711 "dma_device_type": 1 00:17:42.711 }, 00:17:42.711 { 00:17:42.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.711 "dma_device_type": 2 00:17:42.711 } 00:17:42.711 ], 00:17:42.711 "driver_specific": { 00:17:42.711 "raid": { 00:17:42.711 "uuid": "6f464bdd-f358-417e-a185-4b72ff60245a", 00:17:42.711 "strip_size_kb": 64, 00:17:42.711 "state": "online", 00:17:42.711 "raid_level": "concat", 00:17:42.711 "superblock": true, 00:17:42.711 "num_base_bdevs": 3, 00:17:42.711 "num_base_bdevs_discovered": 3, 00:17:42.711 "num_base_bdevs_operational": 3, 00:17:42.711 "base_bdevs_list": [ 00:17:42.711 { 00:17:42.711 "name": "NewBaseBdev", 00:17:42.711 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:42.711 "is_configured": true, 00:17:42.711 "data_offset": 2048, 00:17:42.711 "data_size": 63488 00:17:42.711 }, 00:17:42.712 { 00:17:42.712 "name": "BaseBdev2", 00:17:42.712 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:42.712 "is_configured": true, 00:17:42.712 "data_offset": 2048, 00:17:42.712 "data_size": 63488 00:17:42.712 }, 00:17:42.712 { 00:17:42.712 "name": "BaseBdev3", 00:17:42.712 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:42.712 "is_configured": true, 00:17:42.712 "data_offset": 2048, 00:17:42.712 "data_size": 63488 00:17:42.712 } 00:17:42.712 ] 00:17:42.712 } 00:17:42.712 } 00:17:42.712 }' 00:17:42.712 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.712 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:42.712 BaseBdev2 00:17:42.712 BaseBdev3' 00:17:42.712 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.712 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:42.712 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.970 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.970 "name": "NewBaseBdev", 00:17:42.970 "aliases": [ 00:17:42.970 "64c2a896-10d1-46be-8e29-ec6a40464093" 00:17:42.970 ], 00:17:42.970 "product_name": "Malloc disk", 00:17:42.970 "block_size": 512, 00:17:42.970 "num_blocks": 65536, 00:17:42.970 "uuid": "64c2a896-10d1-46be-8e29-ec6a40464093", 00:17:42.970 "assigned_rate_limits": { 00:17:42.970 
"rw_ios_per_sec": 0, 00:17:42.970 "rw_mbytes_per_sec": 0, 00:17:42.970 "r_mbytes_per_sec": 0, 00:17:42.970 "w_mbytes_per_sec": 0 00:17:42.970 }, 00:17:42.970 "claimed": true, 00:17:42.970 "claim_type": "exclusive_write", 00:17:42.970 "zoned": false, 00:17:42.970 "supported_io_types": { 00:17:42.970 "read": true, 00:17:42.970 "write": true, 00:17:42.970 "unmap": true, 00:17:42.970 "flush": true, 00:17:42.970 "reset": true, 00:17:42.970 "nvme_admin": false, 00:17:42.970 "nvme_io": false, 00:17:42.970 "nvme_io_md": false, 00:17:42.970 "write_zeroes": true, 00:17:42.970 "zcopy": true, 00:17:42.970 "get_zone_info": false, 00:17:42.970 "zone_management": false, 00:17:42.970 "zone_append": false, 00:17:42.970 "compare": false, 00:17:42.970 "compare_and_write": false, 00:17:42.970 "abort": true, 00:17:42.970 "seek_hole": false, 00:17:42.970 "seek_data": false, 00:17:42.970 "copy": true, 00:17:42.970 "nvme_iov_md": false 00:17:42.970 }, 00:17:42.970 "memory_domains": [ 00:17:42.970 { 00:17:42.970 "dma_device_id": "system", 00:17:42.970 "dma_device_type": 1 00:17:42.970 }, 00:17:42.970 { 00:17:42.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.970 "dma_device_type": 2 00:17:42.970 } 00:17:42.970 ], 00:17:42.970 "driver_specific": {} 00:17:42.970 }' 00:17:42.970 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.970 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.970 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.970 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.970 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.228 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.229 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.229 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.229 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.229 10:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.229 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.229 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.229 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.229 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:43.229 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.487 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.487 "name": "BaseBdev2", 00:17:43.487 "aliases": [ 00:17:43.487 "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60" 00:17:43.487 ], 00:17:43.487 "product_name": "Malloc disk", 00:17:43.487 "block_size": 512, 00:17:43.487 "num_blocks": 65536, 00:17:43.487 "uuid": "6d52c0b2-98a5-4e9a-979a-ae8c2d18df60", 00:17:43.487 "assigned_rate_limits": { 00:17:43.487 "rw_ios_per_sec": 0, 00:17:43.487 "rw_mbytes_per_sec": 0, 00:17:43.487 "r_mbytes_per_sec": 0, 00:17:43.487 "w_mbytes_per_sec": 0 
00:17:43.487 }, 00:17:43.487 "claimed": true, 00:17:43.487 "claim_type": "exclusive_write", 00:17:43.487 "zoned": false, 00:17:43.487 "supported_io_types": { 00:17:43.487 "read": true, 00:17:43.487 "write": true, 00:17:43.487 "unmap": true, 00:17:43.487 "flush": true, 00:17:43.487 "reset": true, 00:17:43.487 "nvme_admin": false, 00:17:43.487 "nvme_io": false, 00:17:43.487 "nvme_io_md": false, 00:17:43.487 "write_zeroes": true, 00:17:43.487 "zcopy": true, 00:17:43.487 "get_zone_info": false, 00:17:43.487 "zone_management": false, 00:17:43.487 "zone_append": false, 00:17:43.487 "compare": false, 00:17:43.487 "compare_and_write": false, 00:17:43.487 "abort": true, 00:17:43.487 "seek_hole": false, 00:17:43.487 "seek_data": false, 00:17:43.487 "copy": true, 00:17:43.487 "nvme_iov_md": false 00:17:43.487 }, 00:17:43.487 "memory_domains": [ 00:17:43.487 { 00:17:43.487 "dma_device_id": "system", 00:17:43.487 "dma_device_type": 1 00:17:43.487 }, 00:17:43.487 { 00:17:43.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.487 "dma_device_type": 2 00:17:43.487 } 00:17:43.487 ], 00:17:43.487 "driver_specific": {} 00:17:43.487 }' 00:17:43.487 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.487 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.745 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.746 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.004 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.004 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.004 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.004 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:44.004 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.263 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.263 "name": "BaseBdev3", 00:17:44.263 "aliases": [ 00:17:44.263 "ab536998-3e2a-4115-a3f2-a66187e79e57" 00:17:44.263 ], 00:17:44.263 "product_name": "Malloc disk", 00:17:44.263 "block_size": 512, 00:17:44.263 "num_blocks": 65536, 00:17:44.263 "uuid": "ab536998-3e2a-4115-a3f2-a66187e79e57", 00:17:44.263 "assigned_rate_limits": { 00:17:44.263 "rw_ios_per_sec": 0, 00:17:44.263 "rw_mbytes_per_sec": 0, 00:17:44.263 "r_mbytes_per_sec": 0, 00:17:44.263 "w_mbytes_per_sec": 0 00:17:44.263 }, 00:17:44.263 "claimed": true, 00:17:44.263 "claim_type": "exclusive_write", 00:17:44.263 "zoned": false, 00:17:44.263 
"supported_io_types": { 00:17:44.263 "read": true, 00:17:44.263 "write": true, 00:17:44.263 "unmap": true, 00:17:44.263 "flush": true, 00:17:44.263 "reset": true, 00:17:44.263 "nvme_admin": false, 00:17:44.263 "nvme_io": false, 00:17:44.263 "nvme_io_md": false, 00:17:44.263 "write_zeroes": true, 00:17:44.263 "zcopy": true, 00:17:44.263 "get_zone_info": false, 00:17:44.263 "zone_management": false, 00:17:44.263 "zone_append": false, 00:17:44.263 "compare": false, 00:17:44.263 "compare_and_write": false, 00:17:44.263 "abort": true, 00:17:44.263 "seek_hole": false, 00:17:44.263 "seek_data": false, 00:17:44.263 "copy": true, 00:17:44.263 "nvme_iov_md": false 00:17:44.263 }, 00:17:44.263 "memory_domains": [ 00:17:44.263 { 00:17:44.263 "dma_device_id": "system", 00:17:44.263 "dma_device_type": 1 00:17:44.263 }, 00:17:44.263 { 00:17:44.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.263 "dma_device_type": 2 00:17:44.263 } 00:17:44.263 ], 00:17:44.263 "driver_specific": {} 00:17:44.263 }' 00:17:44.263 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.263 10:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.263 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.521 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.521 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.521 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.521 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.779 [2024-07-26 10:27:57.446676] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.779 [2024-07-26 10:27:57.446702] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.779 [2024-07-26 10:27:57.446748] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.779 [2024-07-26 10:27:57.446794] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.779 [2024-07-26 10:27:57.446804] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xac58f0 name Existed_Raid, state offline 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3391172 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3391172 ']' 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3391172 00:17:44.779 10:27:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3391172 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3391172' 00:17:44.779 killing process with pid 3391172 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3391172 00:17:44.779 [2024-07-26 10:27:57.521424] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.779 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3391172 00:17:44.779 [2024-07-26 10:27:57.545273] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.038 10:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:45.038 00:17:45.038 real 0m26.203s 00:17:45.038 user 0m47.904s 00:17:45.038 sys 0m4.855s 00:17:45.038 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.038 10:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.038 ************************************ 00:17:45.038 END TEST raid_state_function_test_sb 00:17:45.038 ************************************ 00:17:45.038 10:27:57 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:45.038 10:27:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:45.038 10:27:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.038 10:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.038 ************************************ 00:17:45.038 START TEST raid_superblock_test 00:17:45.038 ************************************ 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local 
strip_size_create_arg 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3396167 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3396167 /var/tmp/spdk-raid.sock 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3396167 ']' 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.038 10:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.038 [2024-07-26 10:27:57.846829] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:17:45.038 [2024-07-26 10:27:57.846882] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396167 ] 00:17:45.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:45.039 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:45.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.039 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:45.297 [2024-07-26 10:27:57.982247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.297 [2024-07-26 10:27:58.027433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.297 [2024-07-26 10:27:58.089547] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.297 [2024-07-26 10:27:58.089582] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:45.863 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:46.120 malloc1 00:17:46.120 10:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.379 [2024-07-26 10:27:59.205682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.379 [2024-07-26 10:27:59.205726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.379 [2024-07-26 10:27:59.205744] 
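For readers following the trace: everything raid_superblock_test does from here on goes through SPDK's JSON-RPC interface on /var/tmp/spdk-raid.sock. Below is a condensed sketch (not part of the captured log) of the sequence the test drives next, assuming the bdev_svc app started above is still listening and that rpc.py is the stock SPDK script shown in the trace; the bdev names, sizes, and UUIDs simply mirror what the log records.

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, wrapped by a passthru bdev
    # carrying a fixed UUID, matching the pt1/pt2/pt3 bdevs created below.
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done

# Assemble a concat RAID with a 64 KiB strip size; -s writes the superblock.
$RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

# The same check verify_raid_bdev_state performs: pull the RAID info and
# confirm the volume came online with all three base bdevs configured.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The per-base-bdev property checks that follow in the log (jq .block_size, .md_size, .md_interleave, .dif_type against bdev_get_bdevs -b pt1/pt2/pt3) use the same socket and the same rpc.py invocation pattern.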
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf42270 00:17:46.379 [2024-07-26 10:27:59.205756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.379 [2024-07-26 10:27:59.207177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.379 [2024-07-26 10:27:59.207204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.379 pt1 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.379 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:46.637 malloc2 00:17:46.637 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.896 [2024-07-26 10:27:59.667291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.896 [2024-07-26 10:27:59.667331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.896 [2024-07-26 10:27:59.667347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xefe2f0 00:17:46.896 [2024-07-26 10:27:59.667358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.896 [2024-07-26 10:27:59.668796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.896 [2024-07-26 10:27:59.668822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.896 pt2 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.896 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:47.154 malloc3 00:17:47.154 10:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.412 [2024-07-26 10:28:00.128898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.412 [2024-07-26 10:28:00.128940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.412 [2024-07-26 10:28:00.128956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xec8650 00:17:47.412 [2024-07-26 10:28:00.128967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.412 [2024-07-26 10:28:00.130456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.412 [2024-07-26 10:28:00.130483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.412 pt3 00:17:47.412 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:47.412 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:47.412 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:47.670 [2024-07-26 10:28:00.353517] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.670 [2024-07-26 10:28:00.354667] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.670 [2024-07-26 10:28:00.354715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.670 [2024-07-26 10:28:00.354839] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xec9d00 00:17:47.670 [2024-07-26 10:28:00.354849] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:47.671 [2024-07-26 10:28:00.355029] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xda7320 00:17:47.671 [2024-07-26 10:28:00.355157] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xec9d00 00:17:47.671 [2024-07-26 10:28:00.355170] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xec9d00 00:17:47.671 [2024-07-26 10:28:00.355271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.671 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.929 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.929 "name": "raid_bdev1", 00:17:47.929 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:47.929 "strip_size_kb": 64, 00:17:47.929 "state": "online", 00:17:47.929 "raid_level": "concat", 00:17:47.929 "superblock": true, 00:17:47.929 "num_base_bdevs": 3, 00:17:47.929 "num_base_bdevs_discovered": 3, 00:17:47.929 "num_base_bdevs_operational": 3, 00:17:47.929 "base_bdevs_list": [ 00:17:47.929 { 00:17:47.929 "name": "pt1", 00:17:47.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.929 "is_configured": true, 00:17:47.929 "data_offset": 2048, 00:17:47.929 "data_size": 63488 00:17:47.929 }, 00:17:47.929 { 00:17:47.929 "name": "pt2", 00:17:47.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.929 "is_configured": true, 00:17:47.929 "data_offset": 2048, 00:17:47.929 "data_size": 63488 00:17:47.929 }, 00:17:47.929 { 00:17:47.929 "name": "pt3", 00:17:47.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.929 "is_configured": true, 00:17:47.929 "data_offset": 2048, 00:17:47.929 "data_size": 63488 00:17:47.929 } 00:17:47.929 ] 00:17:47.929 }' 00:17:47.929 10:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.929 10:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:48.496 [2024-07-26 10:28:01.356387] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:48.496 "name": "raid_bdev1", 00:17:48.496 "aliases": [ 00:17:48.496 "2bb0d93b-dfc6-4f36-b988-154af783b1d2" 00:17:48.496 ], 00:17:48.496 "product_name": "Raid Volume", 00:17:48.496 "block_size": 512, 00:17:48.496 "num_blocks": 190464, 00:17:48.496 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:48.496 "assigned_rate_limits": { 00:17:48.496 "rw_ios_per_sec": 0, 00:17:48.496 "rw_mbytes_per_sec": 0, 00:17:48.496 
"r_mbytes_per_sec": 0, 00:17:48.496 "w_mbytes_per_sec": 0 00:17:48.496 }, 00:17:48.496 "claimed": false, 00:17:48.496 "zoned": false, 00:17:48.496 "supported_io_types": { 00:17:48.496 "read": true, 00:17:48.496 "write": true, 00:17:48.496 "unmap": true, 00:17:48.496 "flush": true, 00:17:48.496 "reset": true, 00:17:48.496 "nvme_admin": false, 00:17:48.496 "nvme_io": false, 00:17:48.496 "nvme_io_md": false, 00:17:48.496 "write_zeroes": true, 00:17:48.496 "zcopy": false, 00:17:48.496 "get_zone_info": false, 00:17:48.496 "zone_management": false, 00:17:48.496 "zone_append": false, 00:17:48.496 "compare": false, 00:17:48.496 "compare_and_write": false, 00:17:48.496 "abort": false, 00:17:48.496 "seek_hole": false, 00:17:48.496 "seek_data": false, 00:17:48.496 "copy": false, 00:17:48.496 "nvme_iov_md": false 00:17:48.496 }, 00:17:48.496 "memory_domains": [ 00:17:48.496 { 00:17:48.496 "dma_device_id": "system", 00:17:48.496 "dma_device_type": 1 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.496 "dma_device_type": 2 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "dma_device_id": "system", 00:17:48.496 "dma_device_type": 1 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.496 "dma_device_type": 2 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "dma_device_id": "system", 00:17:48.496 "dma_device_type": 1 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.496 "dma_device_type": 2 00:17:48.496 } 00:17:48.496 ], 00:17:48.496 "driver_specific": { 00:17:48.496 "raid": { 00:17:48.496 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:48.496 "strip_size_kb": 64, 00:17:48.496 "state": "online", 00:17:48.496 "raid_level": "concat", 00:17:48.496 "superblock": true, 00:17:48.496 "num_base_bdevs": 3, 00:17:48.496 "num_base_bdevs_discovered": 3, 00:17:48.496 "num_base_bdevs_operational": 3, 00:17:48.496 "base_bdevs_list": [ 00:17:48.496 { 00:17:48.496 "name": "pt1", 00:17:48.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.496 "is_configured": true, 00:17:48.496 "data_offset": 2048, 00:17:48.496 "data_size": 63488 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "name": "pt2", 00:17:48.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.496 "is_configured": true, 00:17:48.496 "data_offset": 2048, 00:17:48.496 "data_size": 63488 00:17:48.496 }, 00:17:48.496 { 00:17:48.496 "name": "pt3", 00:17:48.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.496 "is_configured": true, 00:17:48.496 "data_offset": 2048, 00:17:48.496 "data_size": 63488 00:17:48.496 } 00:17:48.496 ] 00:17:48.496 } 00:17:48.496 } 00:17:48.496 }' 00:17:48.496 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.754 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:48.754 pt2 00:17:48.754 pt3' 00:17:48.755 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.755 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:48.755 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.755 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.755 "name": "pt1", 00:17:48.755 "aliases": [ 
00:17:48.755 "00000000-0000-0000-0000-000000000001" 00:17:48.755 ], 00:17:48.755 "product_name": "passthru", 00:17:48.755 "block_size": 512, 00:17:48.755 "num_blocks": 65536, 00:17:48.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.755 "assigned_rate_limits": { 00:17:48.755 "rw_ios_per_sec": 0, 00:17:48.755 "rw_mbytes_per_sec": 0, 00:17:48.755 "r_mbytes_per_sec": 0, 00:17:48.755 "w_mbytes_per_sec": 0 00:17:48.755 }, 00:17:48.755 "claimed": true, 00:17:48.755 "claim_type": "exclusive_write", 00:17:48.755 "zoned": false, 00:17:48.755 "supported_io_types": { 00:17:48.755 "read": true, 00:17:48.755 "write": true, 00:17:48.755 "unmap": true, 00:17:48.755 "flush": true, 00:17:48.755 "reset": true, 00:17:48.755 "nvme_admin": false, 00:17:48.755 "nvme_io": false, 00:17:48.755 "nvme_io_md": false, 00:17:48.755 "write_zeroes": true, 00:17:48.755 "zcopy": true, 00:17:48.755 "get_zone_info": false, 00:17:48.755 "zone_management": false, 00:17:48.755 "zone_append": false, 00:17:48.755 "compare": false, 00:17:48.755 "compare_and_write": false, 00:17:48.755 "abort": true, 00:17:48.755 "seek_hole": false, 00:17:48.755 "seek_data": false, 00:17:48.755 "copy": true, 00:17:48.755 "nvme_iov_md": false 00:17:48.755 }, 00:17:48.755 "memory_domains": [ 00:17:48.755 { 00:17:48.755 "dma_device_id": "system", 00:17:48.755 "dma_device_type": 1 00:17:48.755 }, 00:17:48.755 { 00:17:48.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.755 "dma_device_type": 2 00:17:48.755 } 00:17:48.755 ], 00:17:48.755 "driver_specific": { 00:17:48.755 "passthru": { 00:17:48.755 "name": "pt1", 00:17:48.755 "base_bdev_name": "malloc1" 00:17:48.755 } 00:17:48.755 } 00:17:48.755 }' 00:17:48.755 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.013 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.272 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.272 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.272 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.272 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:49.272 10:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.531 "name": "pt2", 00:17:49.531 "aliases": [ 00:17:49.531 "00000000-0000-0000-0000-000000000002" 00:17:49.531 ], 00:17:49.531 "product_name": "passthru", 00:17:49.531 "block_size": 
512, 00:17:49.531 "num_blocks": 65536, 00:17:49.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.531 "assigned_rate_limits": { 00:17:49.531 "rw_ios_per_sec": 0, 00:17:49.531 "rw_mbytes_per_sec": 0, 00:17:49.531 "r_mbytes_per_sec": 0, 00:17:49.531 "w_mbytes_per_sec": 0 00:17:49.531 }, 00:17:49.531 "claimed": true, 00:17:49.531 "claim_type": "exclusive_write", 00:17:49.531 "zoned": false, 00:17:49.531 "supported_io_types": { 00:17:49.531 "read": true, 00:17:49.531 "write": true, 00:17:49.531 "unmap": true, 00:17:49.531 "flush": true, 00:17:49.531 "reset": true, 00:17:49.531 "nvme_admin": false, 00:17:49.531 "nvme_io": false, 00:17:49.531 "nvme_io_md": false, 00:17:49.531 "write_zeroes": true, 00:17:49.531 "zcopy": true, 00:17:49.531 "get_zone_info": false, 00:17:49.531 "zone_management": false, 00:17:49.531 "zone_append": false, 00:17:49.531 "compare": false, 00:17:49.531 "compare_and_write": false, 00:17:49.531 "abort": true, 00:17:49.531 "seek_hole": false, 00:17:49.531 "seek_data": false, 00:17:49.531 "copy": true, 00:17:49.531 "nvme_iov_md": false 00:17:49.531 }, 00:17:49.531 "memory_domains": [ 00:17:49.531 { 00:17:49.531 "dma_device_id": "system", 00:17:49.531 "dma_device_type": 1 00:17:49.531 }, 00:17:49.531 { 00:17:49.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.531 "dma_device_type": 2 00:17:49.531 } 00:17:49.531 ], 00:17:49.531 "driver_specific": { 00:17:49.531 "passthru": { 00:17:49.531 "name": "pt2", 00:17:49.531 "base_bdev_name": "malloc2" 00:17:49.531 } 00:17:49.531 } 00:17:49.531 }' 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.531 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:49.790 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:50.049 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:50.049 "name": "pt3", 00:17:50.049 "aliases": [ 00:17:50.049 "00000000-0000-0000-0000-000000000003" 00:17:50.049 ], 00:17:50.049 "product_name": "passthru", 00:17:50.049 "block_size": 512, 00:17:50.049 "num_blocks": 65536, 00:17:50.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.049 
"assigned_rate_limits": { 00:17:50.049 "rw_ios_per_sec": 0, 00:17:50.049 "rw_mbytes_per_sec": 0, 00:17:50.049 "r_mbytes_per_sec": 0, 00:17:50.049 "w_mbytes_per_sec": 0 00:17:50.049 }, 00:17:50.049 "claimed": true, 00:17:50.049 "claim_type": "exclusive_write", 00:17:50.049 "zoned": false, 00:17:50.049 "supported_io_types": { 00:17:50.049 "read": true, 00:17:50.049 "write": true, 00:17:50.049 "unmap": true, 00:17:50.049 "flush": true, 00:17:50.049 "reset": true, 00:17:50.049 "nvme_admin": false, 00:17:50.049 "nvme_io": false, 00:17:50.049 "nvme_io_md": false, 00:17:50.049 "write_zeroes": true, 00:17:50.049 "zcopy": true, 00:17:50.049 "get_zone_info": false, 00:17:50.049 "zone_management": false, 00:17:50.049 "zone_append": false, 00:17:50.049 "compare": false, 00:17:50.049 "compare_and_write": false, 00:17:50.049 "abort": true, 00:17:50.049 "seek_hole": false, 00:17:50.049 "seek_data": false, 00:17:50.049 "copy": true, 00:17:50.049 "nvme_iov_md": false 00:17:50.049 }, 00:17:50.049 "memory_domains": [ 00:17:50.049 { 00:17:50.049 "dma_device_id": "system", 00:17:50.049 "dma_device_type": 1 00:17:50.049 }, 00:17:50.049 { 00:17:50.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.049 "dma_device_type": 2 00:17:50.049 } 00:17:50.049 ], 00:17:50.049 "driver_specific": { 00:17:50.049 "passthru": { 00:17:50.050 "name": "pt3", 00:17:50.050 "base_bdev_name": "malloc3" 00:17:50.050 } 00:17:50.050 } 00:17:50.050 }' 00:17:50.050 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.050 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.050 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:50.050 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.050 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.309 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:50.309 10:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:50.309 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:50.568 [2024-07-26 10:28:03.345621] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.568 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=2bb0d93b-dfc6-4f36-b988-154af783b1d2 00:17:50.568 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 2bb0d93b-dfc6-4f36-b988-154af783b1d2 ']' 00:17:50.568 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.827 [2024-07-26 10:28:03.573962] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.827 [2024-07-26 10:28:03.573977] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.827 [2024-07-26 10:28:03.574020] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.827 [2024-07-26 10:28:03.574069] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.827 [2024-07-26 10:28:03.574080] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xec9d00 name raid_bdev1, state offline 00:17:50.827 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:50.827 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.095 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:51.095 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:51.095 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.095 10:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:51.369 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.369 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:51.628 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.628 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:51.628 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:51.628 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
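The teardown and expected-failure sequence traced here reduces to a handful of RPC calls; a minimal sketch, assuming a target listening on /var/tmp/spdk-raid.sock and the same bdev names as in the trace (the full rpc.py path is abbreviated to its basename):

    # delete the assembled raid bdev, then its passthru base bdevs
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    for pt in pt1 pt2 pt3; do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete "$pt"
    done

    # confirm no passthru bdevs remain (the test expects "false" here)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs \
        | jq -r '[.[] | select(.product_name == "passthru")] | any'

    # re-creating the raid directly on the malloc bdevs is expected to fail with
    # -17 "File exists": each malloc bdev still carries the old raid superblock
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'malloc1 malloc2 malloc3' -n raid_bdev1

The commands mirror the bdev_raid.sh@456-472 steps visible in the surrounding trace; only the shortened rpc.py name is an assumption.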
00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:51.887 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:52.146 [2024-07-26 10:28:04.953547] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:52.146 [2024-07-26 10:28:04.954778] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:52.146 [2024-07-26 10:28:04.954816] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:52.146 [2024-07-26 10:28:04.954860] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:52.146 [2024-07-26 10:28:04.954895] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:52.146 [2024-07-26 10:28:04.954916] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:52.146 [2024-07-26 10:28:04.954933] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.146 [2024-07-26 10:28:04.954942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeca8d0 name raid_bdev1, state configuring 00:17:52.146 request: 00:17:52.146 { 00:17:52.146 "name": "raid_bdev1", 00:17:52.146 "raid_level": "concat", 00:17:52.146 "base_bdevs": [ 00:17:52.146 "malloc1", 00:17:52.146 "malloc2", 00:17:52.146 "malloc3" 00:17:52.146 ], 00:17:52.146 "strip_size_kb": 64, 00:17:52.146 "superblock": false, 00:17:52.146 "method": "bdev_raid_create", 00:17:52.146 "req_id": 1 00:17:52.146 } 00:17:52.146 Got JSON-RPC error response 00:17:52.146 response: 00:17:52.146 { 00:17:52.146 "code": -17, 00:17:52.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:52.146 } 00:17:52.146 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:52.146 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.146 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.146 10:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.146 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.146 10:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:52.405 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:52.405 10:28:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:52.405 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.665 [2024-07-26 10:28:05.398650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.665 [2024-07-26 10:28:05.398683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.665 [2024-07-26 10:28:05.398699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf425b0 00:17:52.665 [2024-07-26 10:28:05.398711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.665 [2024-07-26 10:28:05.400111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.665 [2024-07-26 10:28:05.400146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.665 [2024-07-26 10:28:05.400203] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.665 [2024-07-26 10:28:05.400226] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.665 pt1 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.665 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.924 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.924 "name": "raid_bdev1", 00:17:52.924 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:52.924 "strip_size_kb": 64, 00:17:52.924 "state": "configuring", 00:17:52.924 "raid_level": "concat", 00:17:52.924 "superblock": true, 00:17:52.924 "num_base_bdevs": 3, 00:17:52.924 "num_base_bdevs_discovered": 1, 00:17:52.924 "num_base_bdevs_operational": 3, 00:17:52.924 "base_bdevs_list": [ 00:17:52.924 { 00:17:52.924 "name": "pt1", 00:17:52.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.924 "is_configured": true, 00:17:52.924 "data_offset": 2048, 00:17:52.924 "data_size": 63488 00:17:52.924 }, 00:17:52.924 { 00:17:52.924 "name": null, 00:17:52.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.924 
"is_configured": false, 00:17:52.924 "data_offset": 2048, 00:17:52.924 "data_size": 63488 00:17:52.924 }, 00:17:52.924 { 00:17:52.924 "name": null, 00:17:52.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.924 "is_configured": false, 00:17:52.924 "data_offset": 2048, 00:17:52.924 "data_size": 63488 00:17:52.924 } 00:17:52.924 ] 00:17:52.924 }' 00:17:52.924 10:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.924 10:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.493 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:17:53.493 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.493 [2024-07-26 10:28:06.361213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.493 [2024-07-26 10:28:06.361258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.493 [2024-07-26 10:28:06.361278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xefda00 00:17:53.493 [2024-07-26 10:28:06.361289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.493 [2024-07-26 10:28:06.361593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.493 [2024-07-26 10:28:06.361609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.493 [2024-07-26 10:28:06.361662] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.493 [2024-07-26 10:28:06.361679] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.493 pt2 00:17:53.493 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:53.752 [2024-07-26 10:28:06.593836] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.752 10:28:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.011 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.011 "name": "raid_bdev1", 00:17:54.011 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:54.011 "strip_size_kb": 64, 00:17:54.011 "state": "configuring", 00:17:54.011 "raid_level": "concat", 00:17:54.011 "superblock": true, 00:17:54.011 "num_base_bdevs": 3, 00:17:54.011 "num_base_bdevs_discovered": 1, 00:17:54.011 "num_base_bdevs_operational": 3, 00:17:54.011 "base_bdevs_list": [ 00:17:54.011 { 00:17:54.012 "name": "pt1", 00:17:54.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.012 "is_configured": true, 00:17:54.012 "data_offset": 2048, 00:17:54.012 "data_size": 63488 00:17:54.012 }, 00:17:54.012 { 00:17:54.012 "name": null, 00:17:54.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.012 "is_configured": false, 00:17:54.012 "data_offset": 2048, 00:17:54.012 "data_size": 63488 00:17:54.012 }, 00:17:54.012 { 00:17:54.012 "name": null, 00:17:54.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.012 "is_configured": false, 00:17:54.012 "data_offset": 2048, 00:17:54.012 "data_size": 63488 00:17:54.012 } 00:17:54.012 ] 00:17:54.012 }' 00:17:54.012 10:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.012 10:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.579 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:54.579 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:54.579 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.838 [2024-07-26 10:28:07.584442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.838 [2024-07-26 10:28:07.584489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.838 [2024-07-26 10:28:07.584507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeca8d0 00:17:54.838 [2024-07-26 10:28:07.584519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.838 [2024-07-26 10:28:07.584832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.838 [2024-07-26 10:28:07.584848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.838 [2024-07-26 10:28:07.584906] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:54.838 [2024-07-26 10:28:07.584923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.838 pt2 00:17:54.838 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:54.838 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:54.838 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:55.098 [2024-07-26 10:28:07.813043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:55.098 [2024-07-26 10:28:07.813085] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:55.098 [2024-07-26 10:28:07.813104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xecd530 00:17:55.098 [2024-07-26 10:28:07.813116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.098 [2024-07-26 10:28:07.813420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.098 [2024-07-26 10:28:07.813437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:55.098 [2024-07-26 10:28:07.813493] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:55.098 [2024-07-26 10:28:07.813509] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:55.098 [2024-07-26 10:28:07.813606] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xecafa0 00:17:55.098 [2024-07-26 10:28:07.813615] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.098 [2024-07-26 10:28:07.813761] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xefdc90 00:17:55.098 [2024-07-26 10:28:07.813869] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xecafa0 00:17:55.098 [2024-07-26 10:28:07.813878] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xecafa0 00:17:55.098 [2024-07-26 10:28:07.813962] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.098 pt3 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.098 10:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.357 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.357 "name": "raid_bdev1", 00:17:55.357 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:55.357 "strip_size_kb": 64, 00:17:55.357 "state": "online", 00:17:55.357 "raid_level": "concat", 00:17:55.357 "superblock": true, 00:17:55.357 "num_base_bdevs": 3, 00:17:55.357 
"num_base_bdevs_discovered": 3, 00:17:55.357 "num_base_bdevs_operational": 3, 00:17:55.357 "base_bdevs_list": [ 00:17:55.357 { 00:17:55.357 "name": "pt1", 00:17:55.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.357 "is_configured": true, 00:17:55.357 "data_offset": 2048, 00:17:55.357 "data_size": 63488 00:17:55.357 }, 00:17:55.357 { 00:17:55.357 "name": "pt2", 00:17:55.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.357 "is_configured": true, 00:17:55.357 "data_offset": 2048, 00:17:55.357 "data_size": 63488 00:17:55.357 }, 00:17:55.357 { 00:17:55.357 "name": "pt3", 00:17:55.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:55.357 "is_configured": true, 00:17:55.357 "data_offset": 2048, 00:17:55.357 "data_size": 63488 00:17:55.357 } 00:17:55.357 ] 00:17:55.358 }' 00:17:55.358 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.358 10:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.926 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:55.926 [2024-07-26 10:28:08.815939] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.185 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:56.185 "name": "raid_bdev1", 00:17:56.185 "aliases": [ 00:17:56.185 "2bb0d93b-dfc6-4f36-b988-154af783b1d2" 00:17:56.185 ], 00:17:56.185 "product_name": "Raid Volume", 00:17:56.185 "block_size": 512, 00:17:56.185 "num_blocks": 190464, 00:17:56.185 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:56.185 "assigned_rate_limits": { 00:17:56.185 "rw_ios_per_sec": 0, 00:17:56.185 "rw_mbytes_per_sec": 0, 00:17:56.185 "r_mbytes_per_sec": 0, 00:17:56.185 "w_mbytes_per_sec": 0 00:17:56.185 }, 00:17:56.186 "claimed": false, 00:17:56.186 "zoned": false, 00:17:56.186 "supported_io_types": { 00:17:56.186 "read": true, 00:17:56.186 "write": true, 00:17:56.186 "unmap": true, 00:17:56.186 "flush": true, 00:17:56.186 "reset": true, 00:17:56.186 "nvme_admin": false, 00:17:56.186 "nvme_io": false, 00:17:56.186 "nvme_io_md": false, 00:17:56.186 "write_zeroes": true, 00:17:56.186 "zcopy": false, 00:17:56.186 "get_zone_info": false, 00:17:56.186 "zone_management": false, 00:17:56.186 "zone_append": false, 00:17:56.186 "compare": false, 00:17:56.186 "compare_and_write": false, 00:17:56.186 "abort": false, 00:17:56.186 "seek_hole": false, 00:17:56.186 "seek_data": false, 00:17:56.186 "copy": false, 00:17:56.186 "nvme_iov_md": false 00:17:56.186 }, 00:17:56.186 "memory_domains": [ 00:17:56.186 { 00:17:56.186 "dma_device_id": "system", 00:17:56.186 "dma_device_type": 1 00:17:56.186 }, 
00:17:56.186 { 00:17:56.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.186 "dma_device_type": 2 00:17:56.186 }, 00:17:56.186 { 00:17:56.186 "dma_device_id": "system", 00:17:56.186 "dma_device_type": 1 00:17:56.186 }, 00:17:56.186 { 00:17:56.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.186 "dma_device_type": 2 00:17:56.186 }, 00:17:56.186 { 00:17:56.186 "dma_device_id": "system", 00:17:56.186 "dma_device_type": 1 00:17:56.186 }, 00:17:56.186 { 00:17:56.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.186 "dma_device_type": 2 00:17:56.186 } 00:17:56.186 ], 00:17:56.186 "driver_specific": { 00:17:56.186 "raid": { 00:17:56.186 "uuid": "2bb0d93b-dfc6-4f36-b988-154af783b1d2", 00:17:56.186 "strip_size_kb": 64, 00:17:56.186 "state": "online", 00:17:56.186 "raid_level": "concat", 00:17:56.186 "superblock": true, 00:17:56.186 "num_base_bdevs": 3, 00:17:56.186 "num_base_bdevs_discovered": 3, 00:17:56.186 "num_base_bdevs_operational": 3, 00:17:56.186 "base_bdevs_list": [ 00:17:56.186 { 00:17:56.186 "name": "pt1", 00:17:56.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.186 "is_configured": true, 00:17:56.186 "data_offset": 2048, 00:17:56.186 "data_size": 63488 00:17:56.186 }, 00:17:56.186 { 00:17:56.186 "name": "pt2", 00:17:56.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.186 "is_configured": true, 00:17:56.186 "data_offset": 2048, 00:17:56.186 "data_size": 63488 00:17:56.186 }, 00:17:56.186 { 00:17:56.186 "name": "pt3", 00:17:56.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.186 "is_configured": true, 00:17:56.186 "data_offset": 2048, 00:17:56.186 "data_size": 63488 00:17:56.186 } 00:17:56.186 ] 00:17:56.186 } 00:17:56.186 } 00:17:56.186 }' 00:17:56.186 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.186 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:56.186 pt2 00:17:56.186 pt3' 00:17:56.186 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.186 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:56.186 10:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.444 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.444 "name": "pt1", 00:17:56.444 "aliases": [ 00:17:56.444 "00000000-0000-0000-0000-000000000001" 00:17:56.444 ], 00:17:56.444 "product_name": "passthru", 00:17:56.444 "block_size": 512, 00:17:56.444 "num_blocks": 65536, 00:17:56.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.444 "assigned_rate_limits": { 00:17:56.444 "rw_ios_per_sec": 0, 00:17:56.444 "rw_mbytes_per_sec": 0, 00:17:56.444 "r_mbytes_per_sec": 0, 00:17:56.444 "w_mbytes_per_sec": 0 00:17:56.444 }, 00:17:56.444 "claimed": true, 00:17:56.444 "claim_type": "exclusive_write", 00:17:56.444 "zoned": false, 00:17:56.444 "supported_io_types": { 00:17:56.444 "read": true, 00:17:56.444 "write": true, 00:17:56.444 "unmap": true, 00:17:56.444 "flush": true, 00:17:56.444 "reset": true, 00:17:56.444 "nvme_admin": false, 00:17:56.444 "nvme_io": false, 00:17:56.444 "nvme_io_md": false, 00:17:56.444 "write_zeroes": true, 00:17:56.444 "zcopy": true, 00:17:56.444 "get_zone_info": false, 00:17:56.444 "zone_management": false, 00:17:56.444 
"zone_append": false, 00:17:56.444 "compare": false, 00:17:56.444 "compare_and_write": false, 00:17:56.444 "abort": true, 00:17:56.444 "seek_hole": false, 00:17:56.444 "seek_data": false, 00:17:56.444 "copy": true, 00:17:56.444 "nvme_iov_md": false 00:17:56.444 }, 00:17:56.444 "memory_domains": [ 00:17:56.444 { 00:17:56.444 "dma_device_id": "system", 00:17:56.444 "dma_device_type": 1 00:17:56.444 }, 00:17:56.444 { 00:17:56.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.444 "dma_device_type": 2 00:17:56.444 } 00:17:56.444 ], 00:17:56.444 "driver_specific": { 00:17:56.444 "passthru": { 00:17:56.444 "name": "pt1", 00:17:56.444 "base_bdev_name": "malloc1" 00:17:56.444 } 00:17:56.444 } 00:17:56.444 }' 00:17:56.444 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.444 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.444 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.444 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.444 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.445 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.445 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:56.704 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.963 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.963 "name": "pt2", 00:17:56.963 "aliases": [ 00:17:56.963 "00000000-0000-0000-0000-000000000002" 00:17:56.963 ], 00:17:56.963 "product_name": "passthru", 00:17:56.963 "block_size": 512, 00:17:56.963 "num_blocks": 65536, 00:17:56.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.963 "assigned_rate_limits": { 00:17:56.963 "rw_ios_per_sec": 0, 00:17:56.963 "rw_mbytes_per_sec": 0, 00:17:56.963 "r_mbytes_per_sec": 0, 00:17:56.963 "w_mbytes_per_sec": 0 00:17:56.963 }, 00:17:56.963 "claimed": true, 00:17:56.963 "claim_type": "exclusive_write", 00:17:56.963 "zoned": false, 00:17:56.963 "supported_io_types": { 00:17:56.963 "read": true, 00:17:56.963 "write": true, 00:17:56.963 "unmap": true, 00:17:56.963 "flush": true, 00:17:56.963 "reset": true, 00:17:56.963 "nvme_admin": false, 00:17:56.963 "nvme_io": false, 00:17:56.963 "nvme_io_md": false, 00:17:56.963 "write_zeroes": true, 00:17:56.963 "zcopy": true, 00:17:56.963 "get_zone_info": false, 00:17:56.963 "zone_management": false, 00:17:56.963 "zone_append": false, 00:17:56.963 "compare": false, 00:17:56.963 "compare_and_write": false, 00:17:56.963 "abort": true, 00:17:56.963 
"seek_hole": false, 00:17:56.963 "seek_data": false, 00:17:56.963 "copy": true, 00:17:56.963 "nvme_iov_md": false 00:17:56.963 }, 00:17:56.963 "memory_domains": [ 00:17:56.963 { 00:17:56.963 "dma_device_id": "system", 00:17:56.963 "dma_device_type": 1 00:17:56.963 }, 00:17:56.963 { 00:17:56.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.963 "dma_device_type": 2 00:17:56.963 } 00:17:56.963 ], 00:17:56.963 "driver_specific": { 00:17:56.963 "passthru": { 00:17:56.963 "name": "pt2", 00:17:56.963 "base_bdev_name": "malloc2" 00:17:56.963 } 00:17:56.963 } 00:17:56.963 }' 00:17:56.963 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.963 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.963 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.963 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.963 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.223 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.223 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.223 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.223 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.223 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.223 10:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.223 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:57.223 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:57.223 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:57.223 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.482 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:57.482 "name": "pt3", 00:17:57.482 "aliases": [ 00:17:57.482 "00000000-0000-0000-0000-000000000003" 00:17:57.482 ], 00:17:57.482 "product_name": "passthru", 00:17:57.482 "block_size": 512, 00:17:57.482 "num_blocks": 65536, 00:17:57.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.482 "assigned_rate_limits": { 00:17:57.482 "rw_ios_per_sec": 0, 00:17:57.482 "rw_mbytes_per_sec": 0, 00:17:57.482 "r_mbytes_per_sec": 0, 00:17:57.482 "w_mbytes_per_sec": 0 00:17:57.482 }, 00:17:57.482 "claimed": true, 00:17:57.482 "claim_type": "exclusive_write", 00:17:57.482 "zoned": false, 00:17:57.482 "supported_io_types": { 00:17:57.482 "read": true, 00:17:57.482 "write": true, 00:17:57.482 "unmap": true, 00:17:57.482 "flush": true, 00:17:57.482 "reset": true, 00:17:57.482 "nvme_admin": false, 00:17:57.482 "nvme_io": false, 00:17:57.482 "nvme_io_md": false, 00:17:57.482 "write_zeroes": true, 00:17:57.482 "zcopy": true, 00:17:57.482 "get_zone_info": false, 00:17:57.482 "zone_management": false, 00:17:57.482 "zone_append": false, 00:17:57.482 "compare": false, 00:17:57.482 "compare_and_write": false, 00:17:57.482 "abort": true, 00:17:57.482 "seek_hole": false, 00:17:57.482 "seek_data": false, 00:17:57.482 "copy": true, 00:17:57.482 "nvme_iov_md": false 00:17:57.482 }, 
00:17:57.482 "memory_domains": [ 00:17:57.482 { 00:17:57.482 "dma_device_id": "system", 00:17:57.482 "dma_device_type": 1 00:17:57.482 }, 00:17:57.482 { 00:17:57.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.482 "dma_device_type": 2 00:17:57.482 } 00:17:57.482 ], 00:17:57.482 "driver_specific": { 00:17:57.483 "passthru": { 00:17:57.483 "name": "pt3", 00:17:57.483 "base_bdev_name": "malloc3" 00:17:57.483 } 00:17:57.483 } 00:17:57.483 }' 00:17:57.483 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.483 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.483 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.483 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.483 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:57.742 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:58.005 [2024-07-26 10:28:10.769076] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.005 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 2bb0d93b-dfc6-4f36-b988-154af783b1d2 '!=' 2bb0d93b-dfc6-4f36-b988-154af783b1d2 ']' 00:17:58.005 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3396167 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3396167 ']' 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3396167 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3396167 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3396167' 00:17:58.006 
killing process with pid 3396167 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3396167 00:17:58.006 [2024-07-26 10:28:10.849206] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.006 [2024-07-26 10:28:10.849254] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.006 [2024-07-26 10:28:10.849304] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.006 [2024-07-26 10:28:10.849315] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xecafa0 name raid_bdev1, state offline 00:17:58.006 10:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3396167 00:17:58.006 [2024-07-26 10:28:10.872904] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.266 10:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:17:58.266 00:17:58.266 real 0m13.249s 00:17:58.266 user 0m23.826s 00:17:58.266 sys 0m2.461s 00:17:58.266 10:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.266 10:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.266 ************************************ 00:17:58.266 END TEST raid_superblock_test 00:17:58.266 ************************************ 00:17:58.266 10:28:11 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:17:58.266 10:28:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:58.266 10:28:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.266 10:28:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.266 ************************************ 00:17:58.266 START TEST raid_read_error_test 00:17:58.266 ************************************ 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.eNcXOkpe0k 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3398698 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3398698 /var/tmp/spdk-raid.sock 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3398698 ']' 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:58.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.266 10:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.525 [2024-07-26 10:28:11.203216] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
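The read-error test drives bdevperf itself as the RPC target and layers each base bdev as malloc -> error -> passthru, so that read failures can later be injected underneath the raid. A minimal sketch of that setup, assuming the same socket and bdev names as the trace (binary and rpc.py paths abbreviated, one of the three base bdevs shown):

    # start bdevperf as the RPC server the test talks to (arguments as traced above)
    bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid &

    # build one base bdev stack: malloc -> error -> passthru (exposed as BaseBdev1)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

    # assemble the concat raid (with superblock, -s) on the three passthru bdevs
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

These RPCs all appear later in this same trace; only the shortened paths are assumptions.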
00:17:58.525 [2024-07-26 10:28:11.203288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398698 ] 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:58.525 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:58.525 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:58.525 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:58.525 [2024-07-26 10:28:11.335603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.525 [2024-07-26 10:28:11.380523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.784 [2024-07-26 10:28:11.438024] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.784 [2024-07-26 10:28:11.438054] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.352 10:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.352 10:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:59.352 10:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:59.352 10:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:59.612 BaseBdev1_malloc 00:17:59.612 10:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:59.871 true 00:17:59.871 10:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:59.871 [2024-07-26 10:28:12.772979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:59.871 [2024-07-26 10:28:12.773020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.871 [2024-07-26 10:28:12.773039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a77c0 00:17:59.871 [2024-07-26 10:28:12.773050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.131 [2024-07-26 10:28:12.774632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.131 [2024-07-26 10:28:12.774660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:00.131 BaseBdev1 00:18:00.131 10:28:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:00.131 10:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:00.131 BaseBdev2_malloc 00:18:00.131 10:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:00.391 true 00:18:00.391 10:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:00.649 [2024-07-26 10:28:13.447149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:00.649 [2024-07-26 10:28:13.447189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.649 [2024-07-26 10:28:13.447210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x204e960 00:18:00.649 [2024-07-26 10:28:13.447222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.649 [2024-07-26 10:28:13.448588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.649 [2024-07-26 10:28:13.448626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:00.649 BaseBdev2 00:18:00.649 10:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:00.649 10:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:00.907 BaseBdev3_malloc 00:18:00.907 10:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:01.166 true 00:18:01.166 10:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:01.425 [2024-07-26 10:28:14.133386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:01.425 [2024-07-26 10:28:14.133426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.425 [2024-07-26 10:28:14.133445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2051720 00:18:01.425 [2024-07-26 10:28:14.133456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.425 [2024-07-26 10:28:14.134809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.425 [2024-07-26 10:28:14.134836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:01.425 BaseBdev3 00:18:01.425 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:01.685 [2024-07-26 10:28:14.354001] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.685 [2024-07-26 10:28:14.355164] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.685 [2024-07-26 10:28:14.355226] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.685 [2024-07-26 10:28:14.355396] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x20505b0 00:18:01.685 [2024-07-26 10:28:14.355406] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.685 [2024-07-26 10:28:14.355581] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2055d60 00:18:01.685 [2024-07-26 10:28:14.355707] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20505b0 00:18:01.685 [2024-07-26 10:28:14.355716] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20505b0 00:18:01.685 [2024-07-26 10:28:14.355820] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.685 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.945 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.945 "name": "raid_bdev1", 00:18:01.945 "uuid": "670beeb9-f9e5-4d1f-863d-bb164cb1dd98", 00:18:01.945 "strip_size_kb": 64, 00:18:01.945 "state": "online", 00:18:01.945 "raid_level": "concat", 00:18:01.945 "superblock": true, 00:18:01.945 "num_base_bdevs": 3, 00:18:01.945 "num_base_bdevs_discovered": 3, 00:18:01.945 "num_base_bdevs_operational": 3, 00:18:01.945 "base_bdevs_list": [ 00:18:01.945 { 00:18:01.945 "name": "BaseBdev1", 00:18:01.945 "uuid": "93fa832a-e964-54a5-9524-b4a39fbefd21", 00:18:01.945 "is_configured": true, 00:18:01.945 "data_offset": 2048, 00:18:01.945 "data_size": 63488 00:18:01.945 }, 00:18:01.945 { 00:18:01.945 "name": "BaseBdev2", 00:18:01.945 "uuid": "c289f3cb-d299-5cf5-9610-c47e1233a793", 00:18:01.945 "is_configured": true, 00:18:01.945 "data_offset": 2048, 00:18:01.945 "data_size": 63488 00:18:01.945 }, 00:18:01.945 { 00:18:01.945 "name": "BaseBdev3", 00:18:01.945 "uuid": "f54d5294-86bd-5bdb-9176-1259048268c8", 00:18:01.945 "is_configured": true, 00:18:01.945 "data_offset": 2048, 00:18:01.945 "data_size": 63488 
00:18:01.945 } 00:18:01.945 ] 00:18:01.945 }' 00:18:01.945 10:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.945 10:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.513 10:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:02.513 10:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:02.513 [2024-07-26 10:28:15.240568] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2052d20 00:18:03.452 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.711 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.992 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.992 "name": "raid_bdev1", 00:18:03.992 "uuid": "670beeb9-f9e5-4d1f-863d-bb164cb1dd98", 00:18:03.992 "strip_size_kb": 64, 00:18:03.992 "state": "online", 00:18:03.992 "raid_level": "concat", 00:18:03.992 "superblock": true, 00:18:03.992 "num_base_bdevs": 3, 00:18:03.992 "num_base_bdevs_discovered": 3, 00:18:03.992 "num_base_bdevs_operational": 3, 00:18:03.992 "base_bdevs_list": [ 00:18:03.992 { 00:18:03.992 "name": "BaseBdev1", 00:18:03.992 "uuid": "93fa832a-e964-54a5-9524-b4a39fbefd21", 00:18:03.992 "is_configured": true, 00:18:03.992 "data_offset": 2048, 00:18:03.992 "data_size": 63488 00:18:03.992 }, 00:18:03.992 { 00:18:03.992 "name": "BaseBdev2", 00:18:03.992 "uuid": "c289f3cb-d299-5cf5-9610-c47e1233a793", 00:18:03.992 "is_configured": true, 00:18:03.992 "data_offset": 2048, 
00:18:03.992 "data_size": 63488 00:18:03.992 }, 00:18:03.992 { 00:18:03.992 "name": "BaseBdev3", 00:18:03.992 "uuid": "f54d5294-86bd-5bdb-9176-1259048268c8", 00:18:03.992 "is_configured": true, 00:18:03.992 "data_offset": 2048, 00:18:03.992 "data_size": 63488 00:18:03.992 } 00:18:03.992 ] 00:18:03.992 }' 00:18:03.992 10:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.992 10:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.289 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:04.548 [2024-07-26 10:28:17.399273] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.548 [2024-07-26 10:28:17.399305] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.548 [2024-07-26 10:28:17.402292] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.548 [2024-07-26 10:28:17.402344] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.548 [2024-07-26 10:28:17.402374] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.548 [2024-07-26 10:28:17.402384] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20505b0 name raid_bdev1, state offline 00:18:04.548 0 00:18:04.548 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3398698 00:18:04.548 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3398698 ']' 00:18:04.548 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3398698 00:18:04.548 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:04.548 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.548 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3398698 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3398698' 00:18:04.807 killing process with pid 3398698 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3398698 00:18:04.807 [2024-07-26 10:28:17.474748] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3398698 00:18:04.807 [2024-07-26 10:28:17.493361] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.eNcXOkpe0k 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:18:04.807 00:18:04.807 real 0m6.559s 00:18:04.807 user 0m10.298s 00:18:04.807 sys 0m1.184s 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:04.807 10:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.807 ************************************ 00:18:04.807 END TEST raid_read_error_test 00:18:04.807 ************************************ 00:18:05.067 10:28:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:18:05.067 10:28:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:05.067 10:28:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.067 10:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.067 ************************************ 00:18:05.067 START TEST raid_write_error_test 00:18:05.067 ************************************ 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:05.067 10:28:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.RKTNsP31AJ 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3399864 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3399864 /var/tmp/spdk-raid.sock 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3399864 ']' 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:05.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.067 10:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.067 [2024-07-26 10:28:17.847697] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
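For orientation only: the RPC sequence this error test drives over /var/tmp/spdk-raid.sock can be condensed as below. Every subcommand, flag and bdev name is copied from the surrounding trace; the loop and the RPC variable are illustrative shorthand, not the literal contents of bdev/bdev_raid.sh.
# condensed sketch (illustrative; commands as they appear in this trace)
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
  $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc               # backing malloc bdev
  $RPC bdev_error_create BaseBdev${i}_malloc                          # error-injectable wrapper EE_BaseBdev${i}_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i} # passthru bdev exposed to the raid
done
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure        # 'read failure' in the read-error variant
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
$RPC bdev_raid_delete raid_bdev1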
00:18:05.067 [2024-07-26 10:28:17.847754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399864 ] 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:05.067 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:05.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:05.067 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:05.326 [2024-07-26 10:28:17.984347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.326 [2024-07-26 10:28:18.028472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.326 [2024-07-26 10:28:18.082819] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.326 [2024-07-26 10:28:18.082847] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.892 10:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.892 10:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:05.892 10:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:05.892 10:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:06.151 BaseBdev1_malloc 00:18:06.151 10:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:06.410 true 00:18:06.410 10:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:06.670 [2024-07-26 10:28:19.421267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:06.670 [2024-07-26 10:28:19.421312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.670 [2024-07-26 10:28:19.421329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb437c0 00:18:06.670 [2024-07-26 10:28:19.421340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.670 [2024-07-26 10:28:19.422868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.670 [2024-07-26 10:28:19.422897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.670 BaseBdev1 00:18:06.670 10:28:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:06.670 10:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:06.929 BaseBdev2_malloc 00:18:06.929 10:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:07.189 true 00:18:07.189 10:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:07.448 [2024-07-26 10:28:20.139513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:07.448 [2024-07-26 10:28:20.139560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.448 [2024-07-26 10:28:20.139578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaea960 00:18:07.448 [2024-07-26 10:28:20.139590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.448 [2024-07-26 10:28:20.140990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.448 [2024-07-26 10:28:20.141019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.448 BaseBdev2 00:18:07.448 10:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:07.448 10:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:07.708 BaseBdev3_malloc 00:18:07.708 10:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:07.708 true 00:18:07.970 10:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:07.970 [2024-07-26 10:28:20.829480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:07.970 [2024-07-26 10:28:20.829522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.970 [2024-07-26 10:28:20.829539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaed720 00:18:07.970 [2024-07-26 10:28:20.829551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.970 [2024-07-26 10:28:20.830798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.970 [2024-07-26 10:28:20.830825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:07.970 BaseBdev3 00:18:07.970 10:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:08.229 [2024-07-26 10:28:21.058111] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.229 [2024-07-26 10:28:21.059180] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.229 [2024-07-26 10:28:21.059244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.229 [2024-07-26 10:28:21.059412] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xaec5b0 00:18:08.229 [2024-07-26 10:28:21.059422] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:08.229 [2024-07-26 10:28:21.059583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaf1d60 00:18:08.229 [2024-07-26 10:28:21.059704] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaec5b0 00:18:08.229 [2024-07-26 10:28:21.059713] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xaec5b0 00:18:08.229 [2024-07-26 10:28:21.059813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.229 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.489 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.489 "name": "raid_bdev1", 00:18:08.489 "uuid": "eab849f3-eea5-482e-98eb-64eba05a8766", 00:18:08.489 "strip_size_kb": 64, 00:18:08.489 "state": "online", 00:18:08.489 "raid_level": "concat", 00:18:08.489 "superblock": true, 00:18:08.489 "num_base_bdevs": 3, 00:18:08.489 "num_base_bdevs_discovered": 3, 00:18:08.489 "num_base_bdevs_operational": 3, 00:18:08.489 "base_bdevs_list": [ 00:18:08.489 { 00:18:08.489 "name": "BaseBdev1", 00:18:08.489 "uuid": "c355b6fa-77aa-5927-a3f9-f16e8f1dc458", 00:18:08.489 "is_configured": true, 00:18:08.489 "data_offset": 2048, 00:18:08.489 "data_size": 63488 00:18:08.489 }, 00:18:08.489 { 00:18:08.489 "name": "BaseBdev2", 00:18:08.489 "uuid": "cad0d3f0-77ab-556f-ae42-beb150dd91fb", 00:18:08.489 "is_configured": true, 00:18:08.489 "data_offset": 2048, 00:18:08.489 "data_size": 63488 00:18:08.489 }, 00:18:08.489 { 00:18:08.489 "name": "BaseBdev3", 00:18:08.489 "uuid": "c36f492b-e621-5ead-b744-8d473cf006ef", 00:18:08.489 "is_configured": true, 00:18:08.489 "data_offset": 2048, 00:18:08.489 
"data_size": 63488 00:18:08.489 } 00:18:08.489 ] 00:18:08.489 }' 00:18:08.489 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.489 10:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.058 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:09.058 10:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:09.058 [2024-07-26 10:28:21.936680] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaeed20 00:18:09.996 10:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.256 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.515 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.515 "name": "raid_bdev1", 00:18:10.515 "uuid": "eab849f3-eea5-482e-98eb-64eba05a8766", 00:18:10.515 "strip_size_kb": 64, 00:18:10.515 "state": "online", 00:18:10.515 "raid_level": "concat", 00:18:10.515 "superblock": true, 00:18:10.515 "num_base_bdevs": 3, 00:18:10.515 "num_base_bdevs_discovered": 3, 00:18:10.515 "num_base_bdevs_operational": 3, 00:18:10.515 "base_bdevs_list": [ 00:18:10.515 { 00:18:10.515 "name": "BaseBdev1", 00:18:10.515 "uuid": "c355b6fa-77aa-5927-a3f9-f16e8f1dc458", 00:18:10.515 "is_configured": true, 00:18:10.515 "data_offset": 2048, 00:18:10.515 "data_size": 63488 00:18:10.515 }, 00:18:10.515 { 00:18:10.515 "name": "BaseBdev2", 00:18:10.515 "uuid": "cad0d3f0-77ab-556f-ae42-beb150dd91fb", 00:18:10.515 "is_configured": 
true, 00:18:10.515 "data_offset": 2048, 00:18:10.515 "data_size": 63488 00:18:10.515 }, 00:18:10.515 { 00:18:10.515 "name": "BaseBdev3", 00:18:10.515 "uuid": "c36f492b-e621-5ead-b744-8d473cf006ef", 00:18:10.515 "is_configured": true, 00:18:10.515 "data_offset": 2048, 00:18:10.515 "data_size": 63488 00:18:10.515 } 00:18:10.515 ] 00:18:10.515 }' 00:18:10.515 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.515 10:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.084 10:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:11.344 [2024-07-26 10:28:24.038755] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.344 [2024-07-26 10:28:24.038792] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.344 [2024-07-26 10:28:24.041716] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.344 [2024-07-26 10:28:24.041751] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.344 [2024-07-26 10:28:24.041780] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.344 [2024-07-26 10:28:24.041790] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaec5b0 name raid_bdev1, state offline 00:18:11.344 0 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3399864 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3399864 ']' 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3399864 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3399864 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3399864' 00:18:11.344 killing process with pid 3399864 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3399864 00:18:11.344 [2024-07-26 10:28:24.117798] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.344 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3399864 00:18:11.344 [2024-07-26 10:28:24.135627] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.RKTNsP31AJ 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy 
concat 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:18:11.604 00:18:11.604 real 0m6.557s 00:18:11.604 user 0m10.306s 00:18:11.604 sys 0m1.141s 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.604 10:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.604 ************************************ 00:18:11.604 END TEST raid_write_error_test 00:18:11.604 ************************************ 00:18:11.604 10:28:24 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:18:11.604 10:28:24 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:18:11.604 10:28:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:11.604 10:28:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.604 10:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.604 ************************************ 00:18:11.604 START TEST raid_state_function_test 00:18:11.604 ************************************ 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 
00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3401111 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3401111' 00:18:11.604 Process raid pid: 3401111 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3401111 /var/tmp/spdk-raid.sock 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3401111 ']' 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.604 10:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.604 [2024-07-26 10:28:24.490387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
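The state-function test that follows exercises the "configuring" state by creating a raid1 bdev whose base bdevs do not yet exist and inspecting it via bdev_raid_get_bdevs. A minimal sketch of that flow, using only the rpc.py calls visible in the trace below (socket path and names as logged, ordering condensed), is:
# minimal sketch (illustrative; see the actual trace below for the full create/verify/delete cycles)
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid    # base bdevs missing -> state "configuring"
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'         # num_base_bdevs_discovered: 0
$RPC bdev_malloc_create 32 512 -b BaseBdev1                                           # first base bdev appears, 1 of 3 discovered
$RPC bdev_raid_delete Existed_Raid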
00:18:11.604 [2024-07-26 10:28:24.490448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:11.864 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:11.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.864 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:11.864 [2024-07-26 10:28:24.627187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.864 [2024-07-26 10:28:24.671578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.864 [2024-07-26 10:28:24.733961] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.864 [2024-07-26 10:28:24.733997] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.801 10:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.801 10:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:12.802 [2024-07-26 10:28:25.598339] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.802 [2024-07-26 10:28:25.598380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.802 [2024-07-26 10:28:25.598391] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.802 [2024-07-26 10:28:25.598402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.802 [2024-07-26 10:28:25.598411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:12.802 [2024-07-26 10:28:25.598422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.802 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.061 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.061 "name": "Existed_Raid", 00:18:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.061 "strip_size_kb": 0, 00:18:13.061 "state": "configuring", 00:18:13.061 "raid_level": "raid1", 00:18:13.061 "superblock": false, 00:18:13.061 "num_base_bdevs": 3, 00:18:13.061 "num_base_bdevs_discovered": 0, 00:18:13.061 "num_base_bdevs_operational": 3, 00:18:13.061 "base_bdevs_list": [ 00:18:13.061 { 00:18:13.061 "name": "BaseBdev1", 00:18:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.061 "is_configured": false, 00:18:13.061 "data_offset": 0, 00:18:13.061 "data_size": 0 00:18:13.061 }, 00:18:13.061 { 00:18:13.061 "name": "BaseBdev2", 00:18:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.061 "is_configured": false, 00:18:13.061 "data_offset": 0, 00:18:13.061 "data_size": 0 00:18:13.061 }, 00:18:13.061 { 00:18:13.061 "name": "BaseBdev3", 00:18:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.061 "is_configured": false, 00:18:13.061 "data_offset": 0, 00:18:13.061 "data_size": 0 00:18:13.061 } 00:18:13.061 ] 00:18:13.061 }' 00:18:13.061 10:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.061 10:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.630 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:13.889 [2024-07-26 10:28:26.628928] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.889 [2024-07-26 10:28:26.628958] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2416b70 name Existed_Raid, state configuring 00:18:13.889 10:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:14.457 [2024-07-26 10:28:27.126242] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.457 [2024-07-26 10:28:27.126276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.457 [2024-07-26 10:28:27.126286] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.457 [2024-07-26 10:28:27.126297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:18:14.457 [2024-07-26 10:28:27.126305] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.457 [2024-07-26 10:28:27.126315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.457 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:14.717 [2024-07-26 10:28:27.372428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.717 BaseBdev1 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:14.717 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:15.285 10:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.285 [ 00:18:15.285 { 00:18:15.285 "name": "BaseBdev1", 00:18:15.285 "aliases": [ 00:18:15.285 "c20c98cf-58be-4fde-a16d-8017a28f9f82" 00:18:15.285 ], 00:18:15.285 "product_name": "Malloc disk", 00:18:15.285 "block_size": 512, 00:18:15.285 "num_blocks": 65536, 00:18:15.285 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:15.285 "assigned_rate_limits": { 00:18:15.285 "rw_ios_per_sec": 0, 00:18:15.285 "rw_mbytes_per_sec": 0, 00:18:15.285 "r_mbytes_per_sec": 0, 00:18:15.285 "w_mbytes_per_sec": 0 00:18:15.285 }, 00:18:15.285 "claimed": true, 00:18:15.285 "claim_type": "exclusive_write", 00:18:15.285 "zoned": false, 00:18:15.285 "supported_io_types": { 00:18:15.285 "read": true, 00:18:15.285 "write": true, 00:18:15.285 "unmap": true, 00:18:15.285 "flush": true, 00:18:15.285 "reset": true, 00:18:15.285 "nvme_admin": false, 00:18:15.285 "nvme_io": false, 00:18:15.285 "nvme_io_md": false, 00:18:15.285 "write_zeroes": true, 00:18:15.285 "zcopy": true, 00:18:15.285 "get_zone_info": false, 00:18:15.285 "zone_management": false, 00:18:15.285 "zone_append": false, 00:18:15.285 "compare": false, 00:18:15.285 "compare_and_write": false, 00:18:15.285 "abort": true, 00:18:15.285 "seek_hole": false, 00:18:15.285 "seek_data": false, 00:18:15.285 "copy": true, 00:18:15.285 "nvme_iov_md": false 00:18:15.285 }, 00:18:15.285 "memory_domains": [ 00:18:15.285 { 00:18:15.285 "dma_device_id": "system", 00:18:15.285 "dma_device_type": 1 00:18:15.285 }, 00:18:15.285 { 00:18:15.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.285 "dma_device_type": 2 00:18:15.285 } 00:18:15.285 ], 00:18:15.285 "driver_specific": {} 00:18:15.285 } 00:18:15.285 ] 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.285 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.545 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.545 "name": "Existed_Raid", 00:18:15.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.545 "strip_size_kb": 0, 00:18:15.545 "state": "configuring", 00:18:15.545 "raid_level": "raid1", 00:18:15.545 "superblock": false, 00:18:15.545 "num_base_bdevs": 3, 00:18:15.545 "num_base_bdevs_discovered": 1, 00:18:15.545 "num_base_bdevs_operational": 3, 00:18:15.545 "base_bdevs_list": [ 00:18:15.545 { 00:18:15.545 "name": "BaseBdev1", 00:18:15.545 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:15.545 "is_configured": true, 00:18:15.545 "data_offset": 0, 00:18:15.545 "data_size": 65536 00:18:15.545 }, 00:18:15.545 { 00:18:15.545 "name": "BaseBdev2", 00:18:15.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.545 "is_configured": false, 00:18:15.545 "data_offset": 0, 00:18:15.545 "data_size": 0 00:18:15.545 }, 00:18:15.545 { 00:18:15.545 "name": "BaseBdev3", 00:18:15.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.545 "is_configured": false, 00:18:15.545 "data_offset": 0, 00:18:15.545 "data_size": 0 00:18:15.545 } 00:18:15.545 ] 00:18:15.545 }' 00:18:15.545 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.545 10:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.113 10:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.372 [2024-07-26 10:28:29.141098] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.372 [2024-07-26 10:28:29.141136] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24164a0 name Existed_Raid, state configuring 00:18:16.372 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:18:16.631 [2024-07-26 10:28:29.373737] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.631 [2024-07-26 10:28:29.375077] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.631 [2024-07-26 10:28:29.375112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.631 [2024-07-26 10:28:29.375121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.631 [2024-07-26 10:28:29.375132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.631 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.907 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.907 "name": "Existed_Raid", 00:18:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.907 "strip_size_kb": 0, 00:18:16.907 "state": "configuring", 00:18:16.907 "raid_level": "raid1", 00:18:16.907 "superblock": false, 00:18:16.907 "num_base_bdevs": 3, 00:18:16.907 "num_base_bdevs_discovered": 1, 00:18:16.907 "num_base_bdevs_operational": 3, 00:18:16.907 "base_bdevs_list": [ 00:18:16.907 { 00:18:16.907 "name": "BaseBdev1", 00:18:16.907 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:16.907 "is_configured": true, 00:18:16.907 "data_offset": 0, 00:18:16.907 "data_size": 65536 00:18:16.907 }, 00:18:16.907 { 00:18:16.907 "name": "BaseBdev2", 00:18:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.907 "is_configured": false, 00:18:16.907 "data_offset": 0, 00:18:16.907 "data_size": 0 00:18:16.907 }, 00:18:16.907 { 00:18:16.907 "name": "BaseBdev3", 00:18:16.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.907 "is_configured": false, 00:18:16.907 "data_offset": 0, 00:18:16.907 "data_size": 0 00:18:16.907 } 00:18:16.907 ] 
00:18:16.907 }' 00:18:16.907 10:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.907 10:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.517 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:17.517 [2024-07-26 10:28:30.407619] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.517 BaseBdev2 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.776 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.038 [ 00:18:18.038 { 00:18:18.038 "name": "BaseBdev2", 00:18:18.038 "aliases": [ 00:18:18.038 "33ebf61f-cab0-4b70-8056-4703aa75ffb3" 00:18:18.038 ], 00:18:18.038 "product_name": "Malloc disk", 00:18:18.038 "block_size": 512, 00:18:18.038 "num_blocks": 65536, 00:18:18.038 "uuid": "33ebf61f-cab0-4b70-8056-4703aa75ffb3", 00:18:18.039 "assigned_rate_limits": { 00:18:18.039 "rw_ios_per_sec": 0, 00:18:18.039 "rw_mbytes_per_sec": 0, 00:18:18.039 "r_mbytes_per_sec": 0, 00:18:18.039 "w_mbytes_per_sec": 0 00:18:18.039 }, 00:18:18.039 "claimed": true, 00:18:18.039 "claim_type": "exclusive_write", 00:18:18.039 "zoned": false, 00:18:18.039 "supported_io_types": { 00:18:18.039 "read": true, 00:18:18.039 "write": true, 00:18:18.039 "unmap": true, 00:18:18.039 "flush": true, 00:18:18.039 "reset": true, 00:18:18.039 "nvme_admin": false, 00:18:18.039 "nvme_io": false, 00:18:18.039 "nvme_io_md": false, 00:18:18.039 "write_zeroes": true, 00:18:18.039 "zcopy": true, 00:18:18.039 "get_zone_info": false, 00:18:18.039 "zone_management": false, 00:18:18.039 "zone_append": false, 00:18:18.039 "compare": false, 00:18:18.039 "compare_and_write": false, 00:18:18.039 "abort": true, 00:18:18.039 "seek_hole": false, 00:18:18.039 "seek_data": false, 00:18:18.039 "copy": true, 00:18:18.039 "nvme_iov_md": false 00:18:18.039 }, 00:18:18.039 "memory_domains": [ 00:18:18.039 { 00:18:18.039 "dma_device_id": "system", 00:18:18.039 "dma_device_type": 1 00:18:18.039 }, 00:18:18.039 { 00:18:18.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.039 "dma_device_type": 2 00:18:18.039 } 00:18:18.039 ], 00:18:18.039 "driver_specific": {} 00:18:18.039 } 00:18:18.039 ] 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:18.039 
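The step that just completed is the test's basic pattern for growing a configuring raid1 volume: create a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON dump above), wait for the examine pass, and confirm the raid module claimed it. A minimal hand-run sketch of that same sequence against the test's RPC socket; only RPCs and arguments that appear in this transcript are used, and the RPC shell variable is shorthand introduced here for readability:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# create the next base bdev: 32 MiB backed by malloc, 512-byte blocks
$RPC bdev_malloc_create 32 512 -b BaseBdev2
# let the bdev layer finish its examine callbacks so the raid module can claim the new bdev
$RPC bdev_wait_for_examine
# dump the bdev (2 s timeout); as in the log above it should report "claimed": true with claim_type "exclusive_write"
$RPC bdev_get_bdevs -b BaseBdev2 -t 2000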
10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.039 10:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.299 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.299 "name": "Existed_Raid", 00:18:18.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.299 "strip_size_kb": 0, 00:18:18.299 "state": "configuring", 00:18:18.299 "raid_level": "raid1", 00:18:18.299 "superblock": false, 00:18:18.299 "num_base_bdevs": 3, 00:18:18.299 "num_base_bdevs_discovered": 2, 00:18:18.299 "num_base_bdevs_operational": 3, 00:18:18.299 "base_bdevs_list": [ 00:18:18.299 { 00:18:18.299 "name": "BaseBdev1", 00:18:18.299 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:18.299 "is_configured": true, 00:18:18.299 "data_offset": 0, 00:18:18.299 "data_size": 65536 00:18:18.299 }, 00:18:18.299 { 00:18:18.299 "name": "BaseBdev2", 00:18:18.299 "uuid": "33ebf61f-cab0-4b70-8056-4703aa75ffb3", 00:18:18.299 "is_configured": true, 00:18:18.299 "data_offset": 0, 00:18:18.299 "data_size": 65536 00:18:18.299 }, 00:18:18.299 { 00:18:18.299 "name": "BaseBdev3", 00:18:18.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.299 "is_configured": false, 00:18:18.299 "data_offset": 0, 00:18:18.299 "data_size": 0 00:18:18.299 } 00:18:18.299 ] 00:18:18.299 }' 00:18:18.299 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.299 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.867 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:19.126 [2024-07-26 10:28:31.882648] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:19.126 [2024-07-26 10:28:31.882684] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x25c92d0 00:18:19.126 [2024-07-26 10:28:31.882692] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 65536, blocklen 512 00:18:19.126 [2024-07-26 10:28:31.882925] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x241b450 00:18:19.126 [2024-07-26 10:28:31.883037] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25c92d0 00:18:19.126 [2024-07-26 10:28:31.883046] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25c92d0 00:18:19.126 [2024-07-26 10:28:31.883206] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.126 BaseBdev3 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:19.126 10:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:19.386 [ 00:18:19.386 { 00:18:19.386 "name": "BaseBdev3", 00:18:19.386 "aliases": [ 00:18:19.386 "8b086980-da6d-48cd-bc47-f4a7a4cd3ef2" 00:18:19.386 ], 00:18:19.386 "product_name": "Malloc disk", 00:18:19.386 "block_size": 512, 00:18:19.386 "num_blocks": 65536, 00:18:19.386 "uuid": "8b086980-da6d-48cd-bc47-f4a7a4cd3ef2", 00:18:19.386 "assigned_rate_limits": { 00:18:19.386 "rw_ios_per_sec": 0, 00:18:19.386 "rw_mbytes_per_sec": 0, 00:18:19.386 "r_mbytes_per_sec": 0, 00:18:19.386 "w_mbytes_per_sec": 0 00:18:19.386 }, 00:18:19.386 "claimed": true, 00:18:19.386 "claim_type": "exclusive_write", 00:18:19.386 "zoned": false, 00:18:19.386 "supported_io_types": { 00:18:19.386 "read": true, 00:18:19.386 "write": true, 00:18:19.386 "unmap": true, 00:18:19.386 "flush": true, 00:18:19.386 "reset": true, 00:18:19.386 "nvme_admin": false, 00:18:19.386 "nvme_io": false, 00:18:19.386 "nvme_io_md": false, 00:18:19.386 "write_zeroes": true, 00:18:19.386 "zcopy": true, 00:18:19.386 "get_zone_info": false, 00:18:19.386 "zone_management": false, 00:18:19.386 "zone_append": false, 00:18:19.386 "compare": false, 00:18:19.386 "compare_and_write": false, 00:18:19.386 "abort": true, 00:18:19.386 "seek_hole": false, 00:18:19.386 "seek_data": false, 00:18:19.386 "copy": true, 00:18:19.386 "nvme_iov_md": false 00:18:19.386 }, 00:18:19.386 "memory_domains": [ 00:18:19.386 { 00:18:19.386 "dma_device_id": "system", 00:18:19.386 "dma_device_type": 1 00:18:19.386 }, 00:18:19.386 { 00:18:19.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.386 "dma_device_type": 2 00:18:19.386 } 00:18:19.386 ], 00:18:19.386 "driver_specific": {} 00:18:19.386 } 00:18:19.386 ] 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:19.386 10:28:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.386 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.646 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.646 "name": "Existed_Raid", 00:18:19.646 "uuid": "e8ad65f7-8364-444d-a875-50af1d081980", 00:18:19.646 "strip_size_kb": 0, 00:18:19.646 "state": "online", 00:18:19.646 "raid_level": "raid1", 00:18:19.646 "superblock": false, 00:18:19.646 "num_base_bdevs": 3, 00:18:19.646 "num_base_bdevs_discovered": 3, 00:18:19.646 "num_base_bdevs_operational": 3, 00:18:19.646 "base_bdevs_list": [ 00:18:19.646 { 00:18:19.646 "name": "BaseBdev1", 00:18:19.646 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:19.646 "is_configured": true, 00:18:19.646 "data_offset": 0, 00:18:19.646 "data_size": 65536 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "name": "BaseBdev2", 00:18:19.646 "uuid": "33ebf61f-cab0-4b70-8056-4703aa75ffb3", 00:18:19.646 "is_configured": true, 00:18:19.646 "data_offset": 0, 00:18:19.646 "data_size": 65536 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "name": "BaseBdev3", 00:18:19.646 "uuid": "8b086980-da6d-48cd-bc47-f4a7a4cd3ef2", 00:18:19.646 "is_configured": true, 00:18:19.646 "data_offset": 0, 00:18:19.646 "data_size": 65536 00:18:19.646 } 00:18:19.646 ] 00:18:19.646 }' 00:18:19.646 10:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.646 10:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:20.214 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:20.474 [2024-07-26 10:28:33.206442] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.474 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:20.474 "name": "Existed_Raid", 00:18:20.474 "aliases": [ 00:18:20.474 "e8ad65f7-8364-444d-a875-50af1d081980" 00:18:20.474 ], 00:18:20.474 "product_name": "Raid Volume", 00:18:20.474 "block_size": 512, 00:18:20.474 "num_blocks": 65536, 00:18:20.474 "uuid": "e8ad65f7-8364-444d-a875-50af1d081980", 00:18:20.474 "assigned_rate_limits": { 00:18:20.474 "rw_ios_per_sec": 0, 00:18:20.474 "rw_mbytes_per_sec": 0, 00:18:20.474 "r_mbytes_per_sec": 0, 00:18:20.474 "w_mbytes_per_sec": 0 00:18:20.474 }, 00:18:20.474 "claimed": false, 00:18:20.474 "zoned": false, 00:18:20.474 "supported_io_types": { 00:18:20.474 "read": true, 00:18:20.474 "write": true, 00:18:20.474 "unmap": false, 00:18:20.474 "flush": false, 00:18:20.474 "reset": true, 00:18:20.474 "nvme_admin": false, 00:18:20.474 "nvme_io": false, 00:18:20.474 "nvme_io_md": false, 00:18:20.474 "write_zeroes": true, 00:18:20.474 "zcopy": false, 00:18:20.474 "get_zone_info": false, 00:18:20.474 "zone_management": false, 00:18:20.474 "zone_append": false, 00:18:20.474 "compare": false, 00:18:20.474 "compare_and_write": false, 00:18:20.474 "abort": false, 00:18:20.474 "seek_hole": false, 00:18:20.474 "seek_data": false, 00:18:20.474 "copy": false, 00:18:20.474 "nvme_iov_md": false 00:18:20.474 }, 00:18:20.474 "memory_domains": [ 00:18:20.474 { 00:18:20.474 "dma_device_id": "system", 00:18:20.474 "dma_device_type": 1 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.474 "dma_device_type": 2 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "dma_device_id": "system", 00:18:20.474 "dma_device_type": 1 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.474 "dma_device_type": 2 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "dma_device_id": "system", 00:18:20.474 "dma_device_type": 1 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.474 "dma_device_type": 2 00:18:20.474 } 00:18:20.474 ], 00:18:20.474 "driver_specific": { 00:18:20.474 "raid": { 00:18:20.474 "uuid": "e8ad65f7-8364-444d-a875-50af1d081980", 00:18:20.474 "strip_size_kb": 0, 00:18:20.474 "state": "online", 00:18:20.474 "raid_level": "raid1", 00:18:20.474 "superblock": false, 00:18:20.474 "num_base_bdevs": 3, 00:18:20.474 "num_base_bdevs_discovered": 3, 00:18:20.474 "num_base_bdevs_operational": 3, 00:18:20.474 "base_bdevs_list": [ 00:18:20.474 { 00:18:20.474 "name": "BaseBdev1", 00:18:20.474 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:20.474 "is_configured": true, 00:18:20.474 "data_offset": 0, 00:18:20.474 "data_size": 65536 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "name": "BaseBdev2", 00:18:20.474 "uuid": "33ebf61f-cab0-4b70-8056-4703aa75ffb3", 00:18:20.474 "is_configured": true, 00:18:20.474 "data_offset": 0, 00:18:20.474 "data_size": 65536 00:18:20.474 }, 00:18:20.474 { 00:18:20.474 "name": "BaseBdev3", 00:18:20.474 "uuid": 
"8b086980-da6d-48cd-bc47-f4a7a4cd3ef2", 00:18:20.474 "is_configured": true, 00:18:20.474 "data_offset": 0, 00:18:20.474 "data_size": 65536 00:18:20.474 } 00:18:20.474 ] 00:18:20.474 } 00:18:20.474 } 00:18:20.474 }' 00:18:20.474 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:20.474 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:20.474 BaseBdev2 00:18:20.474 BaseBdev3' 00:18:20.474 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.474 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.474 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.733 "name": "BaseBdev1", 00:18:20.733 "aliases": [ 00:18:20.733 "c20c98cf-58be-4fde-a16d-8017a28f9f82" 00:18:20.733 ], 00:18:20.733 "product_name": "Malloc disk", 00:18:20.733 "block_size": 512, 00:18:20.733 "num_blocks": 65536, 00:18:20.733 "uuid": "c20c98cf-58be-4fde-a16d-8017a28f9f82", 00:18:20.733 "assigned_rate_limits": { 00:18:20.733 "rw_ios_per_sec": 0, 00:18:20.733 "rw_mbytes_per_sec": 0, 00:18:20.733 "r_mbytes_per_sec": 0, 00:18:20.733 "w_mbytes_per_sec": 0 00:18:20.733 }, 00:18:20.733 "claimed": true, 00:18:20.733 "claim_type": "exclusive_write", 00:18:20.733 "zoned": false, 00:18:20.733 "supported_io_types": { 00:18:20.733 "read": true, 00:18:20.733 "write": true, 00:18:20.733 "unmap": true, 00:18:20.733 "flush": true, 00:18:20.733 "reset": true, 00:18:20.733 "nvme_admin": false, 00:18:20.733 "nvme_io": false, 00:18:20.733 "nvme_io_md": false, 00:18:20.733 "write_zeroes": true, 00:18:20.733 "zcopy": true, 00:18:20.733 "get_zone_info": false, 00:18:20.733 "zone_management": false, 00:18:20.733 "zone_append": false, 00:18:20.733 "compare": false, 00:18:20.733 "compare_and_write": false, 00:18:20.733 "abort": true, 00:18:20.733 "seek_hole": false, 00:18:20.733 "seek_data": false, 00:18:20.733 "copy": true, 00:18:20.733 "nvme_iov_md": false 00:18:20.733 }, 00:18:20.733 "memory_domains": [ 00:18:20.733 { 00:18:20.733 "dma_device_id": "system", 00:18:20.733 "dma_device_type": 1 00:18:20.733 }, 00:18:20.733 { 00:18:20.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.733 "dma_device_type": 2 00:18:20.733 } 00:18:20.733 ], 00:18:20.733 "driver_specific": {} 00:18:20.733 }' 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.733 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.990 10:28:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:20.990 10:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:21.248 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:21.248 "name": "BaseBdev2", 00:18:21.248 "aliases": [ 00:18:21.248 "33ebf61f-cab0-4b70-8056-4703aa75ffb3" 00:18:21.248 ], 00:18:21.248 "product_name": "Malloc disk", 00:18:21.248 "block_size": 512, 00:18:21.248 "num_blocks": 65536, 00:18:21.248 "uuid": "33ebf61f-cab0-4b70-8056-4703aa75ffb3", 00:18:21.248 "assigned_rate_limits": { 00:18:21.248 "rw_ios_per_sec": 0, 00:18:21.248 "rw_mbytes_per_sec": 0, 00:18:21.248 "r_mbytes_per_sec": 0, 00:18:21.248 "w_mbytes_per_sec": 0 00:18:21.248 }, 00:18:21.248 "claimed": true, 00:18:21.248 "claim_type": "exclusive_write", 00:18:21.248 "zoned": false, 00:18:21.248 "supported_io_types": { 00:18:21.248 "read": true, 00:18:21.248 "write": true, 00:18:21.248 "unmap": true, 00:18:21.248 "flush": true, 00:18:21.248 "reset": true, 00:18:21.248 "nvme_admin": false, 00:18:21.248 "nvme_io": false, 00:18:21.248 "nvme_io_md": false, 00:18:21.248 "write_zeroes": true, 00:18:21.248 "zcopy": true, 00:18:21.248 "get_zone_info": false, 00:18:21.248 "zone_management": false, 00:18:21.248 "zone_append": false, 00:18:21.248 "compare": false, 00:18:21.248 "compare_and_write": false, 00:18:21.248 "abort": true, 00:18:21.248 "seek_hole": false, 00:18:21.248 "seek_data": false, 00:18:21.248 "copy": true, 00:18:21.248 "nvme_iov_md": false 00:18:21.248 }, 00:18:21.248 "memory_domains": [ 00:18:21.248 { 00:18:21.249 "dma_device_id": "system", 00:18:21.249 "dma_device_type": 1 00:18:21.249 }, 00:18:21.249 { 00:18:21.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.249 "dma_device_type": 2 00:18:21.249 } 00:18:21.249 ], 00:18:21.249 "driver_specific": {} 00:18:21.249 }' 00:18:21.249 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.249 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.249 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:21.249 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.249 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:21.508 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:21.767 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:21.767 "name": "BaseBdev3", 00:18:21.767 "aliases": [ 00:18:21.767 "8b086980-da6d-48cd-bc47-f4a7a4cd3ef2" 00:18:21.767 ], 00:18:21.767 "product_name": "Malloc disk", 00:18:21.767 "block_size": 512, 00:18:21.767 "num_blocks": 65536, 00:18:21.767 "uuid": "8b086980-da6d-48cd-bc47-f4a7a4cd3ef2", 00:18:21.767 "assigned_rate_limits": { 00:18:21.767 "rw_ios_per_sec": 0, 00:18:21.767 "rw_mbytes_per_sec": 0, 00:18:21.767 "r_mbytes_per_sec": 0, 00:18:21.767 "w_mbytes_per_sec": 0 00:18:21.767 }, 00:18:21.767 "claimed": true, 00:18:21.767 "claim_type": "exclusive_write", 00:18:21.767 "zoned": false, 00:18:21.767 "supported_io_types": { 00:18:21.767 "read": true, 00:18:21.767 "write": true, 00:18:21.767 "unmap": true, 00:18:21.767 "flush": true, 00:18:21.767 "reset": true, 00:18:21.767 "nvme_admin": false, 00:18:21.767 "nvme_io": false, 00:18:21.767 "nvme_io_md": false, 00:18:21.767 "write_zeroes": true, 00:18:21.767 "zcopy": true, 00:18:21.767 "get_zone_info": false, 00:18:21.767 "zone_management": false, 00:18:21.767 "zone_append": false, 00:18:21.767 "compare": false, 00:18:21.767 "compare_and_write": false, 00:18:21.767 "abort": true, 00:18:21.767 "seek_hole": false, 00:18:21.767 "seek_data": false, 00:18:21.767 "copy": true, 00:18:21.767 "nvme_iov_md": false 00:18:21.767 }, 00:18:21.767 "memory_domains": [ 00:18:21.767 { 00:18:21.767 "dma_device_id": "system", 00:18:21.767 "dma_device_type": 1 00:18:21.767 }, 00:18:21.767 { 00:18:21.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.767 "dma_device_type": 2 00:18:21.767 } 00:18:21.767 ], 00:18:21.767 "driver_specific": {} 00:18:21.767 }' 00:18:21.767 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.767 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.767 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:21.767 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:22.026 10:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:22.285 [2024-07-26 10:28:35.123260] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.285 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.544 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.544 "name": "Existed_Raid", 00:18:22.544 "uuid": "e8ad65f7-8364-444d-a875-50af1d081980", 00:18:22.544 "strip_size_kb": 0, 00:18:22.544 "state": "online", 00:18:22.544 "raid_level": "raid1", 00:18:22.544 "superblock": false, 00:18:22.544 "num_base_bdevs": 3, 00:18:22.544 "num_base_bdevs_discovered": 2, 00:18:22.544 "num_base_bdevs_operational": 2, 00:18:22.544 "base_bdevs_list": [ 00:18:22.544 { 00:18:22.544 "name": null, 00:18:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.544 "is_configured": false, 00:18:22.544 "data_offset": 0, 00:18:22.544 "data_size": 65536 00:18:22.544 }, 00:18:22.544 { 00:18:22.544 "name": "BaseBdev2", 00:18:22.544 "uuid": "33ebf61f-cab0-4b70-8056-4703aa75ffb3", 00:18:22.544 "is_configured": true, 00:18:22.544 "data_offset": 0, 00:18:22.544 "data_size": 65536 00:18:22.544 }, 00:18:22.544 { 00:18:22.544 "name": "BaseBdev3", 00:18:22.544 "uuid": "8b086980-da6d-48cd-bc47-f4a7a4cd3ef2", 00:18:22.544 "is_configured": true, 00:18:22.544 "data_offset": 0, 00:18:22.544 "data_size": 65536 00:18:22.544 } 00:18:22.544 ] 00:18:22.544 }' 
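The dump above is what the verify helper asserts on after BaseBdev1 is deleted: a raid1 volume keeps serving with one member gone, so the state stays "online" while num_base_bdevs_discovered drops to 2 and the first slot's name becomes null. A hedged way to reproduce that check outside the helper, reusing the exact jq filter from the transcript (the field projection at the end is illustrative, not part of the test script):

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# pull the raid record and keep only the fields the test compares
$RPC bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | {state, raid_level, num_base_bdevs_discovered, num_base_bdevs_operational}'
# expected at this point in the run: state "online", raid_level "raid1", 2 discovered, 2 operational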
00:18:22.544 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.544 10:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.112 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:23.112 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:23.112 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.112 10:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:23.372 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:23.372 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.372 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:23.631 [2024-07-26 10:28:36.379629] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.631 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:23.631 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:23.631 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.631 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:23.890 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:23.890 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.890 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:24.149 [2024-07-26 10:28:36.850996] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:24.149 [2024-07-26 10:28:36.851079] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.149 [2024-07-26 10:28:36.861422] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.149 [2024-07-26 10:28:36.861452] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.149 [2024-07-26 10:28:36.861462] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25c92d0 name Existed_Raid, state offline 00:18:24.149 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:24.149 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:24.149 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.149 10:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:24.407 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:24.407 10:28:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:24.407 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:24.407 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:24.407 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:24.408 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:24.666 BaseBdev2 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.666 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:24.926 [ 00:18:24.926 { 00:18:24.926 "name": "BaseBdev2", 00:18:24.926 "aliases": [ 00:18:24.926 "22943548-290f-4ff4-bb5f-72032b7096fd" 00:18:24.926 ], 00:18:24.926 "product_name": "Malloc disk", 00:18:24.926 "block_size": 512, 00:18:24.926 "num_blocks": 65536, 00:18:24.926 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:24.926 "assigned_rate_limits": { 00:18:24.926 "rw_ios_per_sec": 0, 00:18:24.926 "rw_mbytes_per_sec": 0, 00:18:24.926 "r_mbytes_per_sec": 0, 00:18:24.926 "w_mbytes_per_sec": 0 00:18:24.926 }, 00:18:24.926 "claimed": false, 00:18:24.926 "zoned": false, 00:18:24.926 "supported_io_types": { 00:18:24.926 "read": true, 00:18:24.926 "write": true, 00:18:24.926 "unmap": true, 00:18:24.926 "flush": true, 00:18:24.926 "reset": true, 00:18:24.926 "nvme_admin": false, 00:18:24.926 "nvme_io": false, 00:18:24.926 "nvme_io_md": false, 00:18:24.926 "write_zeroes": true, 00:18:24.926 "zcopy": true, 00:18:24.926 "get_zone_info": false, 00:18:24.926 "zone_management": false, 00:18:24.926 "zone_append": false, 00:18:24.926 "compare": false, 00:18:24.926 "compare_and_write": false, 00:18:24.926 "abort": true, 00:18:24.926 "seek_hole": false, 00:18:24.926 "seek_data": false, 00:18:24.926 "copy": true, 00:18:24.926 "nvme_iov_md": false 00:18:24.926 }, 00:18:24.926 "memory_domains": [ 00:18:24.926 { 00:18:24.926 "dma_device_id": "system", 00:18:24.926 "dma_device_type": 1 00:18:24.926 }, 00:18:24.926 { 00:18:24.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.926 "dma_device_type": 2 00:18:24.926 } 00:18:24.926 ], 00:18:24.926 "driver_specific": {} 00:18:24.926 } 00:18:24.926 ] 00:18:24.926 10:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:24.926 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 
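Unlike the earlier create-then-claim step, the BaseBdev2 rebuilt just above is not owned by anything: the previous raid volume was already torn down, so its dump reports "claimed": false. A small sketch of that standalone case; the jq selector is illustrative, while the RPC calls and their arguments are the ones shown in the log:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# recreate a base bdev while no raid volume exists
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_wait_for_examine
# nothing claims it yet, so this prints "false" until a raid volume is configured on top of it
$RPC bdev_get_bdevs -b BaseBdev2 -t 2000 | jq '.[0].claimed'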
00:18:24.926 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:24.926 10:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:25.185 BaseBdev3 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:25.185 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.444 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:25.703 [ 00:18:25.703 { 00:18:25.703 "name": "BaseBdev3", 00:18:25.703 "aliases": [ 00:18:25.703 "c0ec45e2-2801-4b09-8500-05ed3e314aa5" 00:18:25.703 ], 00:18:25.703 "product_name": "Malloc disk", 00:18:25.703 "block_size": 512, 00:18:25.703 "num_blocks": 65536, 00:18:25.703 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:25.703 "assigned_rate_limits": { 00:18:25.703 "rw_ios_per_sec": 0, 00:18:25.703 "rw_mbytes_per_sec": 0, 00:18:25.703 "r_mbytes_per_sec": 0, 00:18:25.703 "w_mbytes_per_sec": 0 00:18:25.703 }, 00:18:25.703 "claimed": false, 00:18:25.703 "zoned": false, 00:18:25.703 "supported_io_types": { 00:18:25.703 "read": true, 00:18:25.703 "write": true, 00:18:25.703 "unmap": true, 00:18:25.703 "flush": true, 00:18:25.703 "reset": true, 00:18:25.703 "nvme_admin": false, 00:18:25.703 "nvme_io": false, 00:18:25.703 "nvme_io_md": false, 00:18:25.703 "write_zeroes": true, 00:18:25.703 "zcopy": true, 00:18:25.703 "get_zone_info": false, 00:18:25.703 "zone_management": false, 00:18:25.703 "zone_append": false, 00:18:25.703 "compare": false, 00:18:25.703 "compare_and_write": false, 00:18:25.703 "abort": true, 00:18:25.703 "seek_hole": false, 00:18:25.703 "seek_data": false, 00:18:25.703 "copy": true, 00:18:25.703 "nvme_iov_md": false 00:18:25.703 }, 00:18:25.703 "memory_domains": [ 00:18:25.703 { 00:18:25.703 "dma_device_id": "system", 00:18:25.703 "dma_device_type": 1 00:18:25.703 }, 00:18:25.703 { 00:18:25.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.703 "dma_device_type": 2 00:18:25.703 } 00:18:25.703 ], 00:18:25.703 "driver_specific": {} 00:18:25.703 } 00:18:25.703 ] 00:18:25.703 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:25.703 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:25.703 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:25.703 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:25.963 [2024-07-26 10:28:38.690526] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.963 [2024-07-26 10:28:38.690566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.963 [2024-07-26 10:28:38.690585] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.963 [2024-07-26 10:28:38.691806] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.963 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.222 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.222 "name": "Existed_Raid", 00:18:26.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.222 "strip_size_kb": 0, 00:18:26.222 "state": "configuring", 00:18:26.222 "raid_level": "raid1", 00:18:26.222 "superblock": false, 00:18:26.222 "num_base_bdevs": 3, 00:18:26.222 "num_base_bdevs_discovered": 2, 00:18:26.222 "num_base_bdevs_operational": 3, 00:18:26.222 "base_bdevs_list": [ 00:18:26.222 { 00:18:26.222 "name": "BaseBdev1", 00:18:26.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.222 "is_configured": false, 00:18:26.222 "data_offset": 0, 00:18:26.222 "data_size": 0 00:18:26.222 }, 00:18:26.222 { 00:18:26.222 "name": "BaseBdev2", 00:18:26.222 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:26.222 "is_configured": true, 00:18:26.222 "data_offset": 0, 00:18:26.222 "data_size": 65536 00:18:26.222 }, 00:18:26.222 { 00:18:26.222 "name": "BaseBdev3", 00:18:26.222 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:26.222 "is_configured": true, 00:18:26.222 "data_offset": 0, 00:18:26.222 "data_size": 65536 00:18:26.222 } 00:18:26.222 ] 00:18:26.222 }' 00:18:26.222 10:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.222 10:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.790 10:28:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:27.050 [2024-07-26 10:28:39.693145] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.050 "name": "Existed_Raid", 00:18:27.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.050 "strip_size_kb": 0, 00:18:27.050 "state": "configuring", 00:18:27.050 "raid_level": "raid1", 00:18:27.050 "superblock": false, 00:18:27.050 "num_base_bdevs": 3, 00:18:27.050 "num_base_bdevs_discovered": 1, 00:18:27.050 "num_base_bdevs_operational": 3, 00:18:27.050 "base_bdevs_list": [ 00:18:27.050 { 00:18:27.050 "name": "BaseBdev1", 00:18:27.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.050 "is_configured": false, 00:18:27.050 "data_offset": 0, 00:18:27.050 "data_size": 0 00:18:27.050 }, 00:18:27.050 { 00:18:27.050 "name": null, 00:18:27.050 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:27.050 "is_configured": false, 00:18:27.050 "data_offset": 0, 00:18:27.050 "data_size": 65536 00:18:27.050 }, 00:18:27.050 { 00:18:27.050 "name": "BaseBdev3", 00:18:27.050 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:27.050 "is_configured": true, 00:18:27.050 "data_offset": 0, 00:18:27.050 "data_size": 65536 00:18:27.050 } 00:18:27.050 ] 00:18:27.050 }' 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.050 10:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.987 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.987 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.987 10:28:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:27.987 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.246 [2024-07-26 10:28:40.971589] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.246 BaseBdev1 00:18:28.246 10:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:28.246 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:28.247 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:28.247 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:28.247 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:28.247 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:28.247 10:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.506 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.765 [ 00:18:28.765 { 00:18:28.765 "name": "BaseBdev1", 00:18:28.765 "aliases": [ 00:18:28.765 "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59" 00:18:28.765 ], 00:18:28.765 "product_name": "Malloc disk", 00:18:28.765 "block_size": 512, 00:18:28.765 "num_blocks": 65536, 00:18:28.765 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:28.765 "assigned_rate_limits": { 00:18:28.765 "rw_ios_per_sec": 0, 00:18:28.765 "rw_mbytes_per_sec": 0, 00:18:28.765 "r_mbytes_per_sec": 0, 00:18:28.765 "w_mbytes_per_sec": 0 00:18:28.765 }, 00:18:28.765 "claimed": true, 00:18:28.765 "claim_type": "exclusive_write", 00:18:28.765 "zoned": false, 00:18:28.765 "supported_io_types": { 00:18:28.765 "read": true, 00:18:28.765 "write": true, 00:18:28.765 "unmap": true, 00:18:28.765 "flush": true, 00:18:28.765 "reset": true, 00:18:28.765 "nvme_admin": false, 00:18:28.765 "nvme_io": false, 00:18:28.765 "nvme_io_md": false, 00:18:28.765 "write_zeroes": true, 00:18:28.765 "zcopy": true, 00:18:28.765 "get_zone_info": false, 00:18:28.765 "zone_management": false, 00:18:28.765 "zone_append": false, 00:18:28.765 "compare": false, 00:18:28.765 "compare_and_write": false, 00:18:28.765 "abort": true, 00:18:28.765 "seek_hole": false, 00:18:28.765 "seek_data": false, 00:18:28.765 "copy": true, 00:18:28.765 "nvme_iov_md": false 00:18:28.765 }, 00:18:28.765 "memory_domains": [ 00:18:28.765 { 00:18:28.765 "dma_device_id": "system", 00:18:28.765 "dma_device_type": 1 00:18:28.765 }, 00:18:28.765 { 00:18:28.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.765 "dma_device_type": 2 00:18:28.765 } 00:18:28.765 ], 00:18:28.765 "driver_specific": {} 00:18:28.765 } 00:18:28.765 ] 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
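Note: the trace above re-creates the missing BaseBdev1 as a malloc disk and waits for it to register before the raid state is re-checked. A minimal shell sketch of that re-add pattern, using only the rpc.py calls visible in this run (socket path, bdev name, and the 32 MB x 512-byte geometry, i.e. the 65536 blocks seen in the JSON, are taken from this test; this is an illustration, not the bdev_raid.sh implementation itself):
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev1    # new malloc disk with the name the raid is still expecting
$rpc bdev_wait_for_examine                     # let examine callbacks run so the raid bdev can claim it
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000       # waitforbdev step: wait up to 2000 ms for the bdev to appear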
00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.765 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.766 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.766 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.025 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.025 "name": "Existed_Raid", 00:18:29.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.025 "strip_size_kb": 0, 00:18:29.025 "state": "configuring", 00:18:29.025 "raid_level": "raid1", 00:18:29.025 "superblock": false, 00:18:29.025 "num_base_bdevs": 3, 00:18:29.025 "num_base_bdevs_discovered": 2, 00:18:29.025 "num_base_bdevs_operational": 3, 00:18:29.025 "base_bdevs_list": [ 00:18:29.025 { 00:18:29.025 "name": "BaseBdev1", 00:18:29.025 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:29.025 "is_configured": true, 00:18:29.025 "data_offset": 0, 00:18:29.025 "data_size": 65536 00:18:29.025 }, 00:18:29.025 { 00:18:29.025 "name": null, 00:18:29.025 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:29.025 "is_configured": false, 00:18:29.025 "data_offset": 0, 00:18:29.025 "data_size": 65536 00:18:29.025 }, 00:18:29.025 { 00:18:29.025 "name": "BaseBdev3", 00:18:29.025 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:29.025 "is_configured": true, 00:18:29.025 "data_offset": 0, 00:18:29.025 "data_size": 65536 00:18:29.025 } 00:18:29.025 ] 00:18:29.025 }' 00:18:29.025 10:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.025 10:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.593 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.593 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:29.593 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:29.593 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:29.853 [2024-07-26 10:28:42.696155] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:29.853 
10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.853 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.159 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.159 "name": "Existed_Raid", 00:18:30.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.159 "strip_size_kb": 0, 00:18:30.159 "state": "configuring", 00:18:30.159 "raid_level": "raid1", 00:18:30.159 "superblock": false, 00:18:30.159 "num_base_bdevs": 3, 00:18:30.159 "num_base_bdevs_discovered": 1, 00:18:30.159 "num_base_bdevs_operational": 3, 00:18:30.159 "base_bdevs_list": [ 00:18:30.159 { 00:18:30.159 "name": "BaseBdev1", 00:18:30.159 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:30.159 "is_configured": true, 00:18:30.159 "data_offset": 0, 00:18:30.159 "data_size": 65536 00:18:30.159 }, 00:18:30.159 { 00:18:30.159 "name": null, 00:18:30.159 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:30.159 "is_configured": false, 00:18:30.159 "data_offset": 0, 00:18:30.159 "data_size": 65536 00:18:30.159 }, 00:18:30.159 { 00:18:30.159 "name": null, 00:18:30.159 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:30.159 "is_configured": false, 00:18:30.159 "data_offset": 0, 00:18:30.159 "data_size": 65536 00:18:30.159 } 00:18:30.159 ] 00:18:30.159 }' 00:18:30.160 10:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.160 10:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.748 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.748 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:31.006 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:31.006 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:31.264 [2024-07-26 10:28:43.943563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.264 10:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.264 10:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.523 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.523 "name": "Existed_Raid", 00:18:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.523 "strip_size_kb": 0, 00:18:31.523 "state": "configuring", 00:18:31.523 "raid_level": "raid1", 00:18:31.523 "superblock": false, 00:18:31.523 "num_base_bdevs": 3, 00:18:31.523 "num_base_bdevs_discovered": 2, 00:18:31.523 "num_base_bdevs_operational": 3, 00:18:31.523 "base_bdevs_list": [ 00:18:31.523 { 00:18:31.523 "name": "BaseBdev1", 00:18:31.523 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:31.523 "is_configured": true, 00:18:31.523 "data_offset": 0, 00:18:31.523 "data_size": 65536 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "name": null, 00:18:31.523 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:31.523 "is_configured": false, 00:18:31.523 "data_offset": 0, 00:18:31.523 "data_size": 65536 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "name": "BaseBdev3", 00:18:31.523 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:31.523 "is_configured": true, 00:18:31.523 "data_offset": 0, 00:18:31.523 "data_size": 65536 00:18:31.523 } 00:18:31.523 ] 00:18:31.523 }' 00:18:31.523 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.523 10:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.091 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.091 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:32.091 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:32.091 10:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.351 [2024-07-26 
10:28:45.194882] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.351 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.611 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.611 "name": "Existed_Raid", 00:18:32.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.611 "strip_size_kb": 0, 00:18:32.611 "state": "configuring", 00:18:32.611 "raid_level": "raid1", 00:18:32.611 "superblock": false, 00:18:32.611 "num_base_bdevs": 3, 00:18:32.611 "num_base_bdevs_discovered": 1, 00:18:32.611 "num_base_bdevs_operational": 3, 00:18:32.611 "base_bdevs_list": [ 00:18:32.611 { 00:18:32.611 "name": null, 00:18:32.611 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:32.611 "is_configured": false, 00:18:32.611 "data_offset": 0, 00:18:32.611 "data_size": 65536 00:18:32.611 }, 00:18:32.611 { 00:18:32.611 "name": null, 00:18:32.611 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:32.611 "is_configured": false, 00:18:32.611 "data_offset": 0, 00:18:32.611 "data_size": 65536 00:18:32.611 }, 00:18:32.611 { 00:18:32.611 "name": "BaseBdev3", 00:18:32.611 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:32.611 "is_configured": true, 00:18:32.611 "data_offset": 0, 00:18:32.611 "data_size": 65536 00:18:32.611 } 00:18:32.611 ] 00:18:32.611 }' 00:18:32.611 10:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.611 10:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.180 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.180 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:33.439 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:33.439 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:33.698 [2024-07-26 10:28:46.456378] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.699 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.957 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.957 "name": "Existed_Raid", 00:18:33.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.957 "strip_size_kb": 0, 00:18:33.957 "state": "configuring", 00:18:33.957 "raid_level": "raid1", 00:18:33.957 "superblock": false, 00:18:33.957 "num_base_bdevs": 3, 00:18:33.957 "num_base_bdevs_discovered": 2, 00:18:33.957 "num_base_bdevs_operational": 3, 00:18:33.957 "base_bdevs_list": [ 00:18:33.957 { 00:18:33.957 "name": null, 00:18:33.957 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:33.957 "is_configured": false, 00:18:33.957 "data_offset": 0, 00:18:33.957 "data_size": 65536 00:18:33.957 }, 00:18:33.957 { 00:18:33.958 "name": "BaseBdev2", 00:18:33.958 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:33.958 "is_configured": true, 00:18:33.958 "data_offset": 0, 00:18:33.958 "data_size": 65536 00:18:33.958 }, 00:18:33.958 { 00:18:33.958 "name": "BaseBdev3", 00:18:33.958 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:33.958 "is_configured": true, 00:18:33.958 "data_offset": 0, 00:18:33.958 "data_size": 65536 00:18:33.958 } 00:18:33.958 ] 00:18:33.958 }' 00:18:33.958 10:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.958 10:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.525 10:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.525 10:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:34.785 10:28:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:34.785 10:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.785 10:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:35.044 10:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5a54eee2-f1fd-4787-98bf-7a4a0eac3d59 00:18:35.044 [2024-07-26 10:28:47.931319] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:35.044 [2024-07-26 10:28:47.931353] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x241adc0 00:18:35.044 [2024-07-26 10:28:47.931361] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:35.044 [2024-07-26 10:28:47.931537] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x241f640 00:18:35.044 [2024-07-26 10:28:47.931651] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x241adc0 00:18:35.044 [2024-07-26 10:28:47.931660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x241adc0 00:18:35.044 [2024-07-26 10:28:47.931807] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.044 NewBaseBdev 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:35.304 10:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.304 10:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:35.563 [ 00:18:35.563 { 00:18:35.563 "name": "NewBaseBdev", 00:18:35.563 "aliases": [ 00:18:35.563 "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59" 00:18:35.563 ], 00:18:35.563 "product_name": "Malloc disk", 00:18:35.563 "block_size": 512, 00:18:35.563 "num_blocks": 65536, 00:18:35.563 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:35.563 "assigned_rate_limits": { 00:18:35.563 "rw_ios_per_sec": 0, 00:18:35.563 "rw_mbytes_per_sec": 0, 00:18:35.563 "r_mbytes_per_sec": 0, 00:18:35.563 "w_mbytes_per_sec": 0 00:18:35.563 }, 00:18:35.563 "claimed": true, 00:18:35.563 "claim_type": "exclusive_write", 00:18:35.563 "zoned": false, 00:18:35.563 "supported_io_types": { 00:18:35.563 "read": true, 00:18:35.563 "write": true, 00:18:35.564 "unmap": true, 00:18:35.564 "flush": true, 00:18:35.564 "reset": true, 00:18:35.564 "nvme_admin": false, 00:18:35.564 "nvme_io": false, 00:18:35.564 "nvme_io_md": false, 
00:18:35.564 "write_zeroes": true, 00:18:35.564 "zcopy": true, 00:18:35.564 "get_zone_info": false, 00:18:35.564 "zone_management": false, 00:18:35.564 "zone_append": false, 00:18:35.564 "compare": false, 00:18:35.564 "compare_and_write": false, 00:18:35.564 "abort": true, 00:18:35.564 "seek_hole": false, 00:18:35.564 "seek_data": false, 00:18:35.564 "copy": true, 00:18:35.564 "nvme_iov_md": false 00:18:35.564 }, 00:18:35.564 "memory_domains": [ 00:18:35.564 { 00:18:35.564 "dma_device_id": "system", 00:18:35.564 "dma_device_type": 1 00:18:35.564 }, 00:18:35.564 { 00:18:35.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.564 "dma_device_type": 2 00:18:35.564 } 00:18:35.564 ], 00:18:35.564 "driver_specific": {} 00:18:35.564 } 00:18:35.564 ] 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.564 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.823 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.823 "name": "Existed_Raid", 00:18:35.823 "uuid": "5cf536ac-1f59-446a-b90f-be1ee03fcfac", 00:18:35.823 "strip_size_kb": 0, 00:18:35.823 "state": "online", 00:18:35.823 "raid_level": "raid1", 00:18:35.823 "superblock": false, 00:18:35.823 "num_base_bdevs": 3, 00:18:35.823 "num_base_bdevs_discovered": 3, 00:18:35.823 "num_base_bdevs_operational": 3, 00:18:35.823 "base_bdevs_list": [ 00:18:35.823 { 00:18:35.823 "name": "NewBaseBdev", 00:18:35.823 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:35.823 "is_configured": true, 00:18:35.823 "data_offset": 0, 00:18:35.823 "data_size": 65536 00:18:35.823 }, 00:18:35.823 { 00:18:35.823 "name": "BaseBdev2", 00:18:35.823 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:35.823 "is_configured": true, 00:18:35.823 "data_offset": 0, 00:18:35.823 "data_size": 65536 00:18:35.823 }, 00:18:35.823 { 00:18:35.823 "name": "BaseBdev3", 00:18:35.823 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:35.823 "is_configured": true, 00:18:35.823 "data_offset": 0, 00:18:35.823 "data_size": 65536 00:18:35.823 } 00:18:35.823 ] 00:18:35.823 }' 
00:18:35.823 10:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.823 10:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:36.391 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:36.650 [2024-07-26 10:28:49.375392] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.650 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:36.650 "name": "Existed_Raid", 00:18:36.650 "aliases": [ 00:18:36.650 "5cf536ac-1f59-446a-b90f-be1ee03fcfac" 00:18:36.650 ], 00:18:36.650 "product_name": "Raid Volume", 00:18:36.650 "block_size": 512, 00:18:36.650 "num_blocks": 65536, 00:18:36.650 "uuid": "5cf536ac-1f59-446a-b90f-be1ee03fcfac", 00:18:36.650 "assigned_rate_limits": { 00:18:36.650 "rw_ios_per_sec": 0, 00:18:36.650 "rw_mbytes_per_sec": 0, 00:18:36.650 "r_mbytes_per_sec": 0, 00:18:36.650 "w_mbytes_per_sec": 0 00:18:36.650 }, 00:18:36.650 "claimed": false, 00:18:36.650 "zoned": false, 00:18:36.650 "supported_io_types": { 00:18:36.650 "read": true, 00:18:36.650 "write": true, 00:18:36.650 "unmap": false, 00:18:36.650 "flush": false, 00:18:36.650 "reset": true, 00:18:36.650 "nvme_admin": false, 00:18:36.650 "nvme_io": false, 00:18:36.650 "nvme_io_md": false, 00:18:36.650 "write_zeroes": true, 00:18:36.650 "zcopy": false, 00:18:36.650 "get_zone_info": false, 00:18:36.651 "zone_management": false, 00:18:36.651 "zone_append": false, 00:18:36.651 "compare": false, 00:18:36.651 "compare_and_write": false, 00:18:36.651 "abort": false, 00:18:36.651 "seek_hole": false, 00:18:36.651 "seek_data": false, 00:18:36.651 "copy": false, 00:18:36.651 "nvme_iov_md": false 00:18:36.651 }, 00:18:36.651 "memory_domains": [ 00:18:36.651 { 00:18:36.651 "dma_device_id": "system", 00:18:36.651 "dma_device_type": 1 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.651 "dma_device_type": 2 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "dma_device_id": "system", 00:18:36.651 "dma_device_type": 1 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.651 "dma_device_type": 2 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "dma_device_id": "system", 00:18:36.651 "dma_device_type": 1 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.651 "dma_device_type": 2 00:18:36.651 } 00:18:36.651 ], 00:18:36.651 "driver_specific": { 00:18:36.651 "raid": { 00:18:36.651 "uuid": "5cf536ac-1f59-446a-b90f-be1ee03fcfac", 00:18:36.651 "strip_size_kb": 0, 00:18:36.651 
"state": "online", 00:18:36.651 "raid_level": "raid1", 00:18:36.651 "superblock": false, 00:18:36.651 "num_base_bdevs": 3, 00:18:36.651 "num_base_bdevs_discovered": 3, 00:18:36.651 "num_base_bdevs_operational": 3, 00:18:36.651 "base_bdevs_list": [ 00:18:36.651 { 00:18:36.651 "name": "NewBaseBdev", 00:18:36.651 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:36.651 "is_configured": true, 00:18:36.651 "data_offset": 0, 00:18:36.651 "data_size": 65536 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "name": "BaseBdev2", 00:18:36.651 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:36.651 "is_configured": true, 00:18:36.651 "data_offset": 0, 00:18:36.651 "data_size": 65536 00:18:36.651 }, 00:18:36.651 { 00:18:36.651 "name": "BaseBdev3", 00:18:36.651 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:36.651 "is_configured": true, 00:18:36.651 "data_offset": 0, 00:18:36.651 "data_size": 65536 00:18:36.651 } 00:18:36.651 ] 00:18:36.651 } 00:18:36.651 } 00:18:36.651 }' 00:18:36.651 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.651 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:36.651 BaseBdev2 00:18:36.651 BaseBdev3' 00:18:36.651 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.651 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:36.651 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:36.910 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:36.910 "name": "NewBaseBdev", 00:18:36.910 "aliases": [ 00:18:36.910 "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59" 00:18:36.910 ], 00:18:36.910 "product_name": "Malloc disk", 00:18:36.910 "block_size": 512, 00:18:36.910 "num_blocks": 65536, 00:18:36.910 "uuid": "5a54eee2-f1fd-4787-98bf-7a4a0eac3d59", 00:18:36.910 "assigned_rate_limits": { 00:18:36.910 "rw_ios_per_sec": 0, 00:18:36.910 "rw_mbytes_per_sec": 0, 00:18:36.910 "r_mbytes_per_sec": 0, 00:18:36.910 "w_mbytes_per_sec": 0 00:18:36.910 }, 00:18:36.910 "claimed": true, 00:18:36.910 "claim_type": "exclusive_write", 00:18:36.910 "zoned": false, 00:18:36.910 "supported_io_types": { 00:18:36.910 "read": true, 00:18:36.910 "write": true, 00:18:36.910 "unmap": true, 00:18:36.910 "flush": true, 00:18:36.910 "reset": true, 00:18:36.910 "nvme_admin": false, 00:18:36.910 "nvme_io": false, 00:18:36.910 "nvme_io_md": false, 00:18:36.910 "write_zeroes": true, 00:18:36.910 "zcopy": true, 00:18:36.910 "get_zone_info": false, 00:18:36.910 "zone_management": false, 00:18:36.910 "zone_append": false, 00:18:36.910 "compare": false, 00:18:36.910 "compare_and_write": false, 00:18:36.910 "abort": true, 00:18:36.910 "seek_hole": false, 00:18:36.910 "seek_data": false, 00:18:36.910 "copy": true, 00:18:36.910 "nvme_iov_md": false 00:18:36.910 }, 00:18:36.910 "memory_domains": [ 00:18:36.910 { 00:18:36.910 "dma_device_id": "system", 00:18:36.910 "dma_device_type": 1 00:18:36.910 }, 00:18:36.910 { 00:18:36.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.910 "dma_device_type": 2 00:18:36.910 } 00:18:36.910 ], 00:18:36.911 "driver_specific": {} 00:18:36.911 }' 00:18:36.911 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:36.911 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:36.911 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:36.911 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:36.911 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.170 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.170 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.170 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.170 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.170 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.170 10:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.170 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.170 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.170 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:37.170 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.429 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.429 "name": "BaseBdev2", 00:18:37.429 "aliases": [ 00:18:37.429 "22943548-290f-4ff4-bb5f-72032b7096fd" 00:18:37.429 ], 00:18:37.429 "product_name": "Malloc disk", 00:18:37.429 "block_size": 512, 00:18:37.429 "num_blocks": 65536, 00:18:37.429 "uuid": "22943548-290f-4ff4-bb5f-72032b7096fd", 00:18:37.429 "assigned_rate_limits": { 00:18:37.429 "rw_ios_per_sec": 0, 00:18:37.429 "rw_mbytes_per_sec": 0, 00:18:37.429 "r_mbytes_per_sec": 0, 00:18:37.429 "w_mbytes_per_sec": 0 00:18:37.429 }, 00:18:37.429 "claimed": true, 00:18:37.429 "claim_type": "exclusive_write", 00:18:37.429 "zoned": false, 00:18:37.429 "supported_io_types": { 00:18:37.429 "read": true, 00:18:37.429 "write": true, 00:18:37.429 "unmap": true, 00:18:37.429 "flush": true, 00:18:37.429 "reset": true, 00:18:37.429 "nvme_admin": false, 00:18:37.429 "nvme_io": false, 00:18:37.429 "nvme_io_md": false, 00:18:37.429 "write_zeroes": true, 00:18:37.429 "zcopy": true, 00:18:37.429 "get_zone_info": false, 00:18:37.429 "zone_management": false, 00:18:37.429 "zone_append": false, 00:18:37.429 "compare": false, 00:18:37.429 "compare_and_write": false, 00:18:37.429 "abort": true, 00:18:37.429 "seek_hole": false, 00:18:37.429 "seek_data": false, 00:18:37.429 "copy": true, 00:18:37.429 "nvme_iov_md": false 00:18:37.429 }, 00:18:37.429 "memory_domains": [ 00:18:37.429 { 00:18:37.429 "dma_device_id": "system", 00:18:37.429 "dma_device_type": 1 00:18:37.429 }, 00:18:37.429 { 00:18:37.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.429 "dma_device_type": 2 00:18:37.429 } 00:18:37.429 ], 00:18:37.429 "driver_specific": {} 00:18:37.429 }' 00:18:37.429 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.429 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.688 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.947 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.947 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.947 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.947 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:37.947 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.947 "name": "BaseBdev3", 00:18:37.947 "aliases": [ 00:18:37.947 "c0ec45e2-2801-4b09-8500-05ed3e314aa5" 00:18:37.947 ], 00:18:37.947 "product_name": "Malloc disk", 00:18:37.947 "block_size": 512, 00:18:37.947 "num_blocks": 65536, 00:18:37.947 "uuid": "c0ec45e2-2801-4b09-8500-05ed3e314aa5", 00:18:37.947 "assigned_rate_limits": { 00:18:37.947 "rw_ios_per_sec": 0, 00:18:37.947 "rw_mbytes_per_sec": 0, 00:18:37.947 "r_mbytes_per_sec": 0, 00:18:37.948 "w_mbytes_per_sec": 0 00:18:37.948 }, 00:18:37.948 "claimed": true, 00:18:37.948 "claim_type": "exclusive_write", 00:18:37.948 "zoned": false, 00:18:37.948 "supported_io_types": { 00:18:37.948 "read": true, 00:18:37.948 "write": true, 00:18:37.948 "unmap": true, 00:18:37.948 "flush": true, 00:18:37.948 "reset": true, 00:18:37.948 "nvme_admin": false, 00:18:37.948 "nvme_io": false, 00:18:37.948 "nvme_io_md": false, 00:18:37.948 "write_zeroes": true, 00:18:37.948 "zcopy": true, 00:18:37.948 "get_zone_info": false, 00:18:37.948 "zone_management": false, 00:18:37.948 "zone_append": false, 00:18:37.948 "compare": false, 00:18:37.948 "compare_and_write": false, 00:18:37.948 "abort": true, 00:18:37.948 "seek_hole": false, 00:18:37.948 "seek_data": false, 00:18:37.948 "copy": true, 00:18:37.948 "nvme_iov_md": false 00:18:37.948 }, 00:18:37.948 "memory_domains": [ 00:18:37.948 { 00:18:37.948 "dma_device_id": "system", 00:18:37.948 "dma_device_type": 1 00:18:37.948 }, 00:18:37.948 { 00:18:37.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.948 "dma_device_type": 2 00:18:37.948 } 00:18:37.948 ], 00:18:37.948 "driver_specific": {} 00:18:37.948 }' 00:18:37.948 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.206 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.206 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:38.206 10:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.206 10:28:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.206 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:38.206 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.206 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.206 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.206 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.465 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.465 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.465 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:38.724 [2024-07-26 10:28:51.368384] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.724 [2024-07-26 10:28:51.368410] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.724 [2024-07-26 10:28:51.368455] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.724 [2024-07-26 10:28:51.368683] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.724 [2024-07-26 10:28:51.368694] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x241adc0 name Existed_Raid, state offline 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3401111 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3401111 ']' 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3401111 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3401111 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3401111' 00:18:38.724 killing process with pid 3401111 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3401111 00:18:38.724 [2024-07-26 10:28:51.447388] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.724 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3401111 00:18:38.724 [2024-07-26 10:28:51.470635] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:38.983 00:18:38.983 real 0m27.226s 00:18:38.983 user 0m49.856s 00:18:38.983 sys 0m5.022s 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.983 10:28:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.983 ************************************ 00:18:38.983 END TEST raid_state_function_test 00:18:38.983 ************************************ 00:18:38.983 10:28:51 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:18:38.983 10:28:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:38.983 10:28:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.983 10:28:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.983 ************************************ 00:18:38.983 START TEST raid_state_function_test_sb 00:18:38.983 ************************************ 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' 
true = true ']' 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3406377 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3406377' 00:18:38.983 Process raid pid: 3406377 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3406377 /var/tmp/spdk-raid.sock 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3406377 ']' 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.983 10:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.983 [2024-07-26 10:28:51.804059] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:18:38.983 [2024-07-26 10:28:51.804117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.983 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:38.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.983 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:38.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.983 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:38.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.983 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:38.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.983 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:38.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.983 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested 
device 0000:3d:02.2 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:38.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:38.984 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:39.242 [2024-07-26 10:28:51.940058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.242 [2024-07-26 10:28:51.983999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.242 [2024-07-26 10:28:52.045469] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.242 [2024-07-26 10:28:52.045497] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.811 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 
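Note: the _sb variant launches a fresh bdev_svc app on the raid RPC socket and blocks in waitforlisten until that socket answers before any RPCs are issued. A minimal sketch of this launch-and-wait step (binary path, socket, and flags are the ones used by this run; waitforlisten in autotest_common.sh is more elaborate, so polling rpc_get_methods here is only a stand-in):
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done  # wait for the UNIX-domain RPC socket to come up
# once the app is listening, the test issues bdev_raid_create -s -r raid1 ... as traced below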
00:18:39.811 10:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:39.811 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:40.069 [2024-07-26 10:28:52.913600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.069 [2024-07-26 10:28:52.913640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.069 [2024-07-26 10:28:52.913650] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.069 [2024-07-26 10:28:52.913662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.070 [2024-07-26 10:28:52.913670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.070 [2024-07-26 10:28:52.913680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.070 10:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.329 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.329 "name": "Existed_Raid", 00:18:40.329 "uuid": "92fbe655-ed07-4214-99c3-d5519823282b", 00:18:40.329 "strip_size_kb": 0, 00:18:40.329 "state": "configuring", 00:18:40.329 "raid_level": "raid1", 00:18:40.329 "superblock": true, 00:18:40.329 "num_base_bdevs": 3, 00:18:40.329 "num_base_bdevs_discovered": 0, 00:18:40.329 "num_base_bdevs_operational": 3, 00:18:40.329 "base_bdevs_list": [ 00:18:40.329 { 00:18:40.329 "name": "BaseBdev1", 00:18:40.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.329 "is_configured": false, 00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 0 00:18:40.329 }, 00:18:40.329 { 00:18:40.329 "name": "BaseBdev2", 00:18:40.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.329 "is_configured": false, 
00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 0 00:18:40.329 }, 00:18:40.329 { 00:18:40.329 "name": "BaseBdev3", 00:18:40.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.329 "is_configured": false, 00:18:40.329 "data_offset": 0, 00:18:40.329 "data_size": 0 00:18:40.329 } 00:18:40.329 ] 00:18:40.329 }' 00:18:40.329 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.329 10:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.898 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:41.156 [2024-07-26 10:28:53.952204] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.156 [2024-07-26 10:28:53.952236] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb37b70 name Existed_Raid, state configuring 00:18:41.156 10:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:41.415 [2024-07-26 10:28:54.176802] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.415 [2024-07-26 10:28:54.176827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.415 [2024-07-26 10:28:54.176836] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.415 [2024-07-26 10:28:54.176846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.415 [2024-07-26 10:28:54.176855] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.415 [2024-07-26 10:28:54.176865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.415 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:41.673 [2024-07-26 10:28:54.414876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.673 BaseBdev1 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:41.673 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.932 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 
00:18:42.191 [ 00:18:42.191 { 00:18:42.191 "name": "BaseBdev1", 00:18:42.191 "aliases": [ 00:18:42.191 "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a" 00:18:42.191 ], 00:18:42.191 "product_name": "Malloc disk", 00:18:42.191 "block_size": 512, 00:18:42.191 "num_blocks": 65536, 00:18:42.191 "uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:42.191 "assigned_rate_limits": { 00:18:42.191 "rw_ios_per_sec": 0, 00:18:42.191 "rw_mbytes_per_sec": 0, 00:18:42.191 "r_mbytes_per_sec": 0, 00:18:42.191 "w_mbytes_per_sec": 0 00:18:42.191 }, 00:18:42.191 "claimed": true, 00:18:42.191 "claim_type": "exclusive_write", 00:18:42.191 "zoned": false, 00:18:42.191 "supported_io_types": { 00:18:42.191 "read": true, 00:18:42.191 "write": true, 00:18:42.191 "unmap": true, 00:18:42.191 "flush": true, 00:18:42.191 "reset": true, 00:18:42.191 "nvme_admin": false, 00:18:42.191 "nvme_io": false, 00:18:42.191 "nvme_io_md": false, 00:18:42.191 "write_zeroes": true, 00:18:42.191 "zcopy": true, 00:18:42.191 "get_zone_info": false, 00:18:42.191 "zone_management": false, 00:18:42.191 "zone_append": false, 00:18:42.191 "compare": false, 00:18:42.191 "compare_and_write": false, 00:18:42.191 "abort": true, 00:18:42.191 "seek_hole": false, 00:18:42.191 "seek_data": false, 00:18:42.191 "copy": true, 00:18:42.191 "nvme_iov_md": false 00:18:42.191 }, 00:18:42.191 "memory_domains": [ 00:18:42.191 { 00:18:42.191 "dma_device_id": "system", 00:18:42.191 "dma_device_type": 1 00:18:42.191 }, 00:18:42.191 { 00:18:42.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.191 "dma_device_type": 2 00:18:42.191 } 00:18:42.191 ], 00:18:42.191 "driver_specific": {} 00:18:42.191 } 00:18:42.191 ] 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.191 10:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.449 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.449 "name": "Existed_Raid", 00:18:42.449 "uuid": "8b115081-e5e2-480e-af60-7196e3d5445b", 00:18:42.449 "strip_size_kb": 0, 00:18:42.449 "state": 
"configuring", 00:18:42.449 "raid_level": "raid1", 00:18:42.449 "superblock": true, 00:18:42.449 "num_base_bdevs": 3, 00:18:42.449 "num_base_bdevs_discovered": 1, 00:18:42.449 "num_base_bdevs_operational": 3, 00:18:42.449 "base_bdevs_list": [ 00:18:42.449 { 00:18:42.449 "name": "BaseBdev1", 00:18:42.449 "uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:42.449 "is_configured": true, 00:18:42.449 "data_offset": 2048, 00:18:42.449 "data_size": 63488 00:18:42.449 }, 00:18:42.449 { 00:18:42.449 "name": "BaseBdev2", 00:18:42.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.449 "is_configured": false, 00:18:42.449 "data_offset": 0, 00:18:42.449 "data_size": 0 00:18:42.449 }, 00:18:42.449 { 00:18:42.449 "name": "BaseBdev3", 00:18:42.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.449 "is_configured": false, 00:18:42.449 "data_offset": 0, 00:18:42.449 "data_size": 0 00:18:42.449 } 00:18:42.449 ] 00:18:42.449 }' 00:18:42.449 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.449 10:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.038 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:43.038 [2024-07-26 10:28:55.886742] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:43.038 [2024-07-26 10:28:55.886781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb374a0 name Existed_Raid, state configuring 00:18:43.038 10:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:43.309 [2024-07-26 10:28:56.115381] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.309 [2024-07-26 10:28:56.116724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.309 [2024-07-26 10:28:56.116758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.309 [2024-07-26 10:28:56.116767] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:43.309 [2024-07-26 10:28:56.116778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.309 10:28:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.309 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.569 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.569 "name": "Existed_Raid", 00:18:43.569 "uuid": "205abdad-61d4-408a-8006-19a6ebb4de95", 00:18:43.569 "strip_size_kb": 0, 00:18:43.569 "state": "configuring", 00:18:43.569 "raid_level": "raid1", 00:18:43.569 "superblock": true, 00:18:43.569 "num_base_bdevs": 3, 00:18:43.569 "num_base_bdevs_discovered": 1, 00:18:43.569 "num_base_bdevs_operational": 3, 00:18:43.569 "base_bdevs_list": [ 00:18:43.569 { 00:18:43.569 "name": "BaseBdev1", 00:18:43.569 "uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:43.569 "is_configured": true, 00:18:43.569 "data_offset": 2048, 00:18:43.569 "data_size": 63488 00:18:43.569 }, 00:18:43.569 { 00:18:43.569 "name": "BaseBdev2", 00:18:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.569 "is_configured": false, 00:18:43.569 "data_offset": 0, 00:18:43.569 "data_size": 0 00:18:43.569 }, 00:18:43.569 { 00:18:43.569 "name": "BaseBdev3", 00:18:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.569 "is_configured": false, 00:18:43.569 "data_offset": 0, 00:18:43.569 "data_size": 0 00:18:43.569 } 00:18:43.569 ] 00:18:43.569 }' 00:18:43.569 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.569 10:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.137 10:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:44.396 [2024-07-26 10:28:57.085016] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.396 BaseBdev2 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.396 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.656 [ 00:18:44.656 { 00:18:44.656 "name": "BaseBdev2", 00:18:44.656 "aliases": [ 00:18:44.656 "1e1ac5c2-5769-417b-a8a9-35d8f35722a7" 00:18:44.656 ], 00:18:44.656 "product_name": "Malloc disk", 00:18:44.656 "block_size": 512, 00:18:44.656 "num_blocks": 65536, 00:18:44.656 "uuid": "1e1ac5c2-5769-417b-a8a9-35d8f35722a7", 00:18:44.656 "assigned_rate_limits": { 00:18:44.656 "rw_ios_per_sec": 0, 00:18:44.656 "rw_mbytes_per_sec": 0, 00:18:44.656 "r_mbytes_per_sec": 0, 00:18:44.656 "w_mbytes_per_sec": 0 00:18:44.656 }, 00:18:44.656 "claimed": true, 00:18:44.656 "claim_type": "exclusive_write", 00:18:44.656 "zoned": false, 00:18:44.656 "supported_io_types": { 00:18:44.656 "read": true, 00:18:44.656 "write": true, 00:18:44.656 "unmap": true, 00:18:44.656 "flush": true, 00:18:44.656 "reset": true, 00:18:44.656 "nvme_admin": false, 00:18:44.656 "nvme_io": false, 00:18:44.656 "nvme_io_md": false, 00:18:44.656 "write_zeroes": true, 00:18:44.656 "zcopy": true, 00:18:44.656 "get_zone_info": false, 00:18:44.656 "zone_management": false, 00:18:44.656 "zone_append": false, 00:18:44.656 "compare": false, 00:18:44.656 "compare_and_write": false, 00:18:44.656 "abort": true, 00:18:44.656 "seek_hole": false, 00:18:44.656 "seek_data": false, 00:18:44.656 "copy": true, 00:18:44.656 "nvme_iov_md": false 00:18:44.656 }, 00:18:44.656 "memory_domains": [ 00:18:44.656 { 00:18:44.656 "dma_device_id": "system", 00:18:44.656 "dma_device_type": 1 00:18:44.656 }, 00:18:44.657 { 00:18:44.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.657 "dma_device_type": 2 00:18:44.657 } 00:18:44.657 ], 00:18:44.657 "driver_specific": {} 00:18:44.657 } 00:18:44.657 ] 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.657 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.657 10:28:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.916 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.916 "name": "Existed_Raid", 00:18:44.916 "uuid": "205abdad-61d4-408a-8006-19a6ebb4de95", 00:18:44.916 "strip_size_kb": 0, 00:18:44.916 "state": "configuring", 00:18:44.916 "raid_level": "raid1", 00:18:44.916 "superblock": true, 00:18:44.916 "num_base_bdevs": 3, 00:18:44.916 "num_base_bdevs_discovered": 2, 00:18:44.916 "num_base_bdevs_operational": 3, 00:18:44.916 "base_bdevs_list": [ 00:18:44.916 { 00:18:44.916 "name": "BaseBdev1", 00:18:44.916 "uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:44.916 "is_configured": true, 00:18:44.916 "data_offset": 2048, 00:18:44.916 "data_size": 63488 00:18:44.916 }, 00:18:44.916 { 00:18:44.916 "name": "BaseBdev2", 00:18:44.916 "uuid": "1e1ac5c2-5769-417b-a8a9-35d8f35722a7", 00:18:44.916 "is_configured": true, 00:18:44.916 "data_offset": 2048, 00:18:44.916 "data_size": 63488 00:18:44.916 }, 00:18:44.916 { 00:18:44.916 "name": "BaseBdev3", 00:18:44.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.916 "is_configured": false, 00:18:44.916 "data_offset": 0, 00:18:44.916 "data_size": 0 00:18:44.916 } 00:18:44.916 ] 00:18:44.916 }' 00:18:44.916 10:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.916 10:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.484 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:45.744 [2024-07-26 10:28:58.479840] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.744 [2024-07-26 10:28:58.479991] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xcea2d0 00:18:45.744 [2024-07-26 10:28:58.480003] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.744 [2024-07-26 10:28:58.480172] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb37a90 00:18:45.744 [2024-07-26 10:28:58.480295] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcea2d0 00:18:45.744 [2024-07-26 10:28:58.480315] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcea2d0 00:18:45.744 [2024-07-26 10:28:58.480404] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.744 BaseBdev3 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:45.744 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
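The assembly pattern here is incremental: bdev_raid_create ran first, leaving Existed_Raid in the configuring state with all three base bdevs absent, and each bdev_malloc_create 32 512 produced a 32 MiB, 512-byte-block disk (num_blocks 65536 in its descriptor) that the raid claimed on examine. The BaseBdev3 creation just above completes the set, so Existed_Raid registers its I/O device, and the next state dump reports online with all three members discovered. The -s superblock option is what leaves data_offset 2048 and data_size 63488 of those 65536 blocks for each configured member, matching the blockcnt 63488 in the configure debug lines. The same sequence, condensed into the RPCs that appear in this log (script and socket paths as in this run):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid   # state: configuring
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"   # each member gets claimed by Existed_Raid on examine
        $rpc bdev_wait_for_examine
    done
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # prints "online"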
00:18:46.004 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:46.263 [ 00:18:46.263 { 00:18:46.263 "name": "BaseBdev3", 00:18:46.263 "aliases": [ 00:18:46.263 "7f1b107f-9c9e-41b0-8ef4-0a26a0977024" 00:18:46.263 ], 00:18:46.263 "product_name": "Malloc disk", 00:18:46.263 "block_size": 512, 00:18:46.263 "num_blocks": 65536, 00:18:46.263 "uuid": "7f1b107f-9c9e-41b0-8ef4-0a26a0977024", 00:18:46.263 "assigned_rate_limits": { 00:18:46.263 "rw_ios_per_sec": 0, 00:18:46.263 "rw_mbytes_per_sec": 0, 00:18:46.263 "r_mbytes_per_sec": 0, 00:18:46.263 "w_mbytes_per_sec": 0 00:18:46.263 }, 00:18:46.263 "claimed": true, 00:18:46.263 "claim_type": "exclusive_write", 00:18:46.263 "zoned": false, 00:18:46.263 "supported_io_types": { 00:18:46.263 "read": true, 00:18:46.263 "write": true, 00:18:46.263 "unmap": true, 00:18:46.263 "flush": true, 00:18:46.263 "reset": true, 00:18:46.263 "nvme_admin": false, 00:18:46.263 "nvme_io": false, 00:18:46.263 "nvme_io_md": false, 00:18:46.263 "write_zeroes": true, 00:18:46.263 "zcopy": true, 00:18:46.263 "get_zone_info": false, 00:18:46.263 "zone_management": false, 00:18:46.263 "zone_append": false, 00:18:46.263 "compare": false, 00:18:46.263 "compare_and_write": false, 00:18:46.263 "abort": true, 00:18:46.263 "seek_hole": false, 00:18:46.263 "seek_data": false, 00:18:46.263 "copy": true, 00:18:46.263 "nvme_iov_md": false 00:18:46.263 }, 00:18:46.263 "memory_domains": [ 00:18:46.263 { 00:18:46.263 "dma_device_id": "system", 00:18:46.263 "dma_device_type": 1 00:18:46.263 }, 00:18:46.263 { 00:18:46.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.263 "dma_device_type": 2 00:18:46.263 } 00:18:46.263 ], 00:18:46.263 "driver_specific": {} 00:18:46.263 } 00:18:46.263 ] 00:18:46.263 10:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:46.263 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:46.263 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:46.263 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.264 10:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.523 10:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.523 "name": "Existed_Raid", 00:18:46.523 "uuid": "205abdad-61d4-408a-8006-19a6ebb4de95", 00:18:46.523 "strip_size_kb": 0, 00:18:46.523 "state": "online", 00:18:46.523 "raid_level": "raid1", 00:18:46.523 "superblock": true, 00:18:46.523 "num_base_bdevs": 3, 00:18:46.523 "num_base_bdevs_discovered": 3, 00:18:46.523 "num_base_bdevs_operational": 3, 00:18:46.523 "base_bdevs_list": [ 00:18:46.523 { 00:18:46.523 "name": "BaseBdev1", 00:18:46.523 "uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:46.523 "is_configured": true, 00:18:46.523 "data_offset": 2048, 00:18:46.523 "data_size": 63488 00:18:46.523 }, 00:18:46.523 { 00:18:46.523 "name": "BaseBdev2", 00:18:46.523 "uuid": "1e1ac5c2-5769-417b-a8a9-35d8f35722a7", 00:18:46.523 "is_configured": true, 00:18:46.523 "data_offset": 2048, 00:18:46.523 "data_size": 63488 00:18:46.523 }, 00:18:46.523 { 00:18:46.523 "name": "BaseBdev3", 00:18:46.523 "uuid": "7f1b107f-9c9e-41b0-8ef4-0a26a0977024", 00:18:46.523 "is_configured": true, 00:18:46.523 "data_offset": 2048, 00:18:46.523 "data_size": 63488 00:18:46.523 } 00:18:46.523 ] 00:18:46.523 }' 00:18:46.523 10:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.523 10:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:47.461 [2024-07-26 10:29:00.268858] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:47.461 "name": "Existed_Raid", 00:18:47.461 "aliases": [ 00:18:47.461 "205abdad-61d4-408a-8006-19a6ebb4de95" 00:18:47.461 ], 00:18:47.461 "product_name": "Raid Volume", 00:18:47.461 "block_size": 512, 00:18:47.461 "num_blocks": 63488, 00:18:47.461 "uuid": "205abdad-61d4-408a-8006-19a6ebb4de95", 00:18:47.461 "assigned_rate_limits": { 00:18:47.461 "rw_ios_per_sec": 0, 00:18:47.461 "rw_mbytes_per_sec": 0, 00:18:47.461 "r_mbytes_per_sec": 0, 00:18:47.461 "w_mbytes_per_sec": 0 00:18:47.461 }, 00:18:47.461 "claimed": false, 00:18:47.461 "zoned": false, 00:18:47.461 "supported_io_types": { 00:18:47.461 "read": true, 00:18:47.461 "write": true, 00:18:47.461 "unmap": false, 00:18:47.461 "flush": false, 
00:18:47.461 "reset": true, 00:18:47.461 "nvme_admin": false, 00:18:47.461 "nvme_io": false, 00:18:47.461 "nvme_io_md": false, 00:18:47.461 "write_zeroes": true, 00:18:47.461 "zcopy": false, 00:18:47.461 "get_zone_info": false, 00:18:47.461 "zone_management": false, 00:18:47.461 "zone_append": false, 00:18:47.461 "compare": false, 00:18:47.461 "compare_and_write": false, 00:18:47.461 "abort": false, 00:18:47.461 "seek_hole": false, 00:18:47.461 "seek_data": false, 00:18:47.461 "copy": false, 00:18:47.461 "nvme_iov_md": false 00:18:47.461 }, 00:18:47.461 "memory_domains": [ 00:18:47.461 { 00:18:47.461 "dma_device_id": "system", 00:18:47.461 "dma_device_type": 1 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.461 "dma_device_type": 2 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "dma_device_id": "system", 00:18:47.461 "dma_device_type": 1 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.461 "dma_device_type": 2 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "dma_device_id": "system", 00:18:47.461 "dma_device_type": 1 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.461 "dma_device_type": 2 00:18:47.461 } 00:18:47.461 ], 00:18:47.461 "driver_specific": { 00:18:47.461 "raid": { 00:18:47.461 "uuid": "205abdad-61d4-408a-8006-19a6ebb4de95", 00:18:47.461 "strip_size_kb": 0, 00:18:47.461 "state": "online", 00:18:47.461 "raid_level": "raid1", 00:18:47.461 "superblock": true, 00:18:47.461 "num_base_bdevs": 3, 00:18:47.461 "num_base_bdevs_discovered": 3, 00:18:47.461 "num_base_bdevs_operational": 3, 00:18:47.461 "base_bdevs_list": [ 00:18:47.461 { 00:18:47.461 "name": "BaseBdev1", 00:18:47.461 "uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:47.461 "is_configured": true, 00:18:47.461 "data_offset": 2048, 00:18:47.461 "data_size": 63488 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "name": "BaseBdev2", 00:18:47.461 "uuid": "1e1ac5c2-5769-417b-a8a9-35d8f35722a7", 00:18:47.461 "is_configured": true, 00:18:47.461 "data_offset": 2048, 00:18:47.461 "data_size": 63488 00:18:47.461 }, 00:18:47.461 { 00:18:47.461 "name": "BaseBdev3", 00:18:47.461 "uuid": "7f1b107f-9c9e-41b0-8ef4-0a26a0977024", 00:18:47.461 "is_configured": true, 00:18:47.461 "data_offset": 2048, 00:18:47.461 "data_size": 63488 00:18:47.461 } 00:18:47.461 ] 00:18:47.461 } 00:18:47.461 } 00:18:47.461 }' 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:47.461 BaseBdev2 00:18:47.461 BaseBdev3' 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:47.461 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:47.721 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:47.721 "name": "BaseBdev1", 00:18:47.721 "aliases": [ 00:18:47.721 "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a" 00:18:47.721 ], 00:18:47.721 "product_name": "Malloc disk", 00:18:47.721 "block_size": 512, 00:18:47.721 "num_blocks": 65536, 00:18:47.721 
"uuid": "4e5c75ea-5c6e-4b86-a4d5-cbe6b398ef7a", 00:18:47.721 "assigned_rate_limits": { 00:18:47.721 "rw_ios_per_sec": 0, 00:18:47.721 "rw_mbytes_per_sec": 0, 00:18:47.721 "r_mbytes_per_sec": 0, 00:18:47.721 "w_mbytes_per_sec": 0 00:18:47.721 }, 00:18:47.721 "claimed": true, 00:18:47.721 "claim_type": "exclusive_write", 00:18:47.721 "zoned": false, 00:18:47.721 "supported_io_types": { 00:18:47.721 "read": true, 00:18:47.721 "write": true, 00:18:47.721 "unmap": true, 00:18:47.721 "flush": true, 00:18:47.721 "reset": true, 00:18:47.721 "nvme_admin": false, 00:18:47.721 "nvme_io": false, 00:18:47.721 "nvme_io_md": false, 00:18:47.721 "write_zeroes": true, 00:18:47.721 "zcopy": true, 00:18:47.721 "get_zone_info": false, 00:18:47.721 "zone_management": false, 00:18:47.721 "zone_append": false, 00:18:47.721 "compare": false, 00:18:47.721 "compare_and_write": false, 00:18:47.721 "abort": true, 00:18:47.721 "seek_hole": false, 00:18:47.721 "seek_data": false, 00:18:47.721 "copy": true, 00:18:47.721 "nvme_iov_md": false 00:18:47.721 }, 00:18:47.721 "memory_domains": [ 00:18:47.721 { 00:18:47.721 "dma_device_id": "system", 00:18:47.721 "dma_device_type": 1 00:18:47.721 }, 00:18:47.721 { 00:18:47.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.721 "dma_device_type": 2 00:18:47.721 } 00:18:47.721 ], 00:18:47.721 "driver_specific": {} 00:18:47.721 }' 00:18:47.721 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:47.721 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:47.721 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:47.721 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:47.980 10:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:48.240 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:48.240 "name": "BaseBdev2", 00:18:48.240 "aliases": [ 00:18:48.240 "1e1ac5c2-5769-417b-a8a9-35d8f35722a7" 00:18:48.240 ], 00:18:48.240 "product_name": "Malloc disk", 00:18:48.240 "block_size": 512, 00:18:48.240 "num_blocks": 65536, 00:18:48.240 "uuid": "1e1ac5c2-5769-417b-a8a9-35d8f35722a7", 00:18:48.240 "assigned_rate_limits": { 00:18:48.240 "rw_ios_per_sec": 0, 00:18:48.240 
"rw_mbytes_per_sec": 0, 00:18:48.240 "r_mbytes_per_sec": 0, 00:18:48.240 "w_mbytes_per_sec": 0 00:18:48.240 }, 00:18:48.240 "claimed": true, 00:18:48.240 "claim_type": "exclusive_write", 00:18:48.240 "zoned": false, 00:18:48.240 "supported_io_types": { 00:18:48.240 "read": true, 00:18:48.240 "write": true, 00:18:48.240 "unmap": true, 00:18:48.240 "flush": true, 00:18:48.240 "reset": true, 00:18:48.240 "nvme_admin": false, 00:18:48.240 "nvme_io": false, 00:18:48.240 "nvme_io_md": false, 00:18:48.240 "write_zeroes": true, 00:18:48.240 "zcopy": true, 00:18:48.240 "get_zone_info": false, 00:18:48.240 "zone_management": false, 00:18:48.240 "zone_append": false, 00:18:48.240 "compare": false, 00:18:48.240 "compare_and_write": false, 00:18:48.240 "abort": true, 00:18:48.240 "seek_hole": false, 00:18:48.240 "seek_data": false, 00:18:48.240 "copy": true, 00:18:48.240 "nvme_iov_md": false 00:18:48.240 }, 00:18:48.240 "memory_domains": [ 00:18:48.240 { 00:18:48.240 "dma_device_id": "system", 00:18:48.240 "dma_device_type": 1 00:18:48.240 }, 00:18:48.240 { 00:18:48.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.240 "dma_device_type": 2 00:18:48.240 } 00:18:48.240 ], 00:18:48.240 "driver_specific": {} 00:18:48.240 }' 00:18:48.240 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:48.499 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.758 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.758 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:48.758 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:48.758 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:48.758 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:49.018 "name": "BaseBdev3", 00:18:49.018 "aliases": [ 00:18:49.018 "7f1b107f-9c9e-41b0-8ef4-0a26a0977024" 00:18:49.018 ], 00:18:49.018 "product_name": "Malloc disk", 00:18:49.018 "block_size": 512, 00:18:49.018 "num_blocks": 65536, 00:18:49.018 "uuid": "7f1b107f-9c9e-41b0-8ef4-0a26a0977024", 00:18:49.018 "assigned_rate_limits": { 00:18:49.018 "rw_ios_per_sec": 0, 00:18:49.018 "rw_mbytes_per_sec": 0, 00:18:49.018 "r_mbytes_per_sec": 0, 00:18:49.018 "w_mbytes_per_sec": 0 00:18:49.018 }, 00:18:49.018 
"claimed": true, 00:18:49.018 "claim_type": "exclusive_write", 00:18:49.018 "zoned": false, 00:18:49.018 "supported_io_types": { 00:18:49.018 "read": true, 00:18:49.018 "write": true, 00:18:49.018 "unmap": true, 00:18:49.018 "flush": true, 00:18:49.018 "reset": true, 00:18:49.018 "nvme_admin": false, 00:18:49.018 "nvme_io": false, 00:18:49.018 "nvme_io_md": false, 00:18:49.018 "write_zeroes": true, 00:18:49.018 "zcopy": true, 00:18:49.018 "get_zone_info": false, 00:18:49.018 "zone_management": false, 00:18:49.018 "zone_append": false, 00:18:49.018 "compare": false, 00:18:49.018 "compare_and_write": false, 00:18:49.018 "abort": true, 00:18:49.018 "seek_hole": false, 00:18:49.018 "seek_data": false, 00:18:49.018 "copy": true, 00:18:49.018 "nvme_iov_md": false 00:18:49.018 }, 00:18:49.018 "memory_domains": [ 00:18:49.018 { 00:18:49.018 "dma_device_id": "system", 00:18:49.018 "dma_device_type": 1 00:18:49.018 }, 00:18:49.018 { 00:18:49.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.018 "dma_device_type": 2 00:18:49.018 } 00:18:49.018 ], 00:18:49.018 "driver_specific": {} 00:18:49.018 }' 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.018 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.277 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:49.277 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.277 10:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.277 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:49.277 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.537 [2024-07-26 10:29:02.221768] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.537 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.796 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.796 "name": "Existed_Raid", 00:18:49.796 "uuid": "205abdad-61d4-408a-8006-19a6ebb4de95", 00:18:49.796 "strip_size_kb": 0, 00:18:49.796 "state": "online", 00:18:49.796 "raid_level": "raid1", 00:18:49.796 "superblock": true, 00:18:49.796 "num_base_bdevs": 3, 00:18:49.796 "num_base_bdevs_discovered": 2, 00:18:49.796 "num_base_bdevs_operational": 2, 00:18:49.796 "base_bdevs_list": [ 00:18:49.796 { 00:18:49.796 "name": null, 00:18:49.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.796 "is_configured": false, 00:18:49.796 "data_offset": 2048, 00:18:49.796 "data_size": 63488 00:18:49.796 }, 00:18:49.796 { 00:18:49.796 "name": "BaseBdev2", 00:18:49.796 "uuid": "1e1ac5c2-5769-417b-a8a9-35d8f35722a7", 00:18:49.796 "is_configured": true, 00:18:49.796 "data_offset": 2048, 00:18:49.796 "data_size": 63488 00:18:49.796 }, 00:18:49.796 { 00:18:49.796 "name": "BaseBdev3", 00:18:49.796 "uuid": "7f1b107f-9c9e-41b0-8ef4-0a26a0977024", 00:18:49.796 "is_configured": true, 00:18:49.796 "data_offset": 2048, 00:18:49.796 "data_size": 63488 00:18:49.796 } 00:18:49.796 ] 00:18:49.796 }' 00:18:49.796 10:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.796 10:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.364 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:50.364 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:50.364 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.364 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:50.623 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:50.623 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.623 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:50.623 [2024-07-26 10:29:03.502228] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.882 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:51.141 [2024-07-26 10:29:03.953420] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:51.141 [2024-07-26 10:29:03.953508] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.141 [2024-07-26 10:29:03.963909] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.141 [2024-07-26 10:29:03.963943] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.141 [2024-07-26 10:29:03.963953] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcea2d0 name Existed_Raid, state offline 00:18:51.141 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:51.141 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:51.141 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.141 10:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.710 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:51.710 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:51.710 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:51.710 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:51.710 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:51.710 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.969 BaseBdev2 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:51.969 10:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.229 10:29:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:52.229 [ 00:18:52.229 { 00:18:52.229 "name": "BaseBdev2", 00:18:52.229 "aliases": [ 00:18:52.229 "90704feb-348d-48ff-af38-f620e09e107c" 00:18:52.229 ], 00:18:52.229 "product_name": "Malloc disk", 00:18:52.229 "block_size": 512, 00:18:52.229 "num_blocks": 65536, 00:18:52.229 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:18:52.229 "assigned_rate_limits": { 00:18:52.229 "rw_ios_per_sec": 0, 00:18:52.229 "rw_mbytes_per_sec": 0, 00:18:52.229 "r_mbytes_per_sec": 0, 00:18:52.229 "w_mbytes_per_sec": 0 00:18:52.229 }, 00:18:52.229 "claimed": false, 00:18:52.229 "zoned": false, 00:18:52.229 "supported_io_types": { 00:18:52.229 "read": true, 00:18:52.229 "write": true, 00:18:52.229 "unmap": true, 00:18:52.229 "flush": true, 00:18:52.229 "reset": true, 00:18:52.229 "nvme_admin": false, 00:18:52.229 "nvme_io": false, 00:18:52.229 "nvme_io_md": false, 00:18:52.229 "write_zeroes": true, 00:18:52.229 "zcopy": true, 00:18:52.229 "get_zone_info": false, 00:18:52.229 "zone_management": false, 00:18:52.229 "zone_append": false, 00:18:52.229 "compare": false, 00:18:52.229 "compare_and_write": false, 00:18:52.229 "abort": true, 00:18:52.229 "seek_hole": false, 00:18:52.229 "seek_data": false, 00:18:52.229 "copy": true, 00:18:52.229 "nvme_iov_md": false 00:18:52.229 }, 00:18:52.229 "memory_domains": [ 00:18:52.229 { 00:18:52.229 "dma_device_id": "system", 00:18:52.229 "dma_device_type": 1 00:18:52.229 }, 00:18:52.229 { 00:18:52.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.229 "dma_device_type": 2 00:18:52.229 } 00:18:52.229 ], 00:18:52.229 "driver_specific": {} 00:18:52.229 } 00:18:52.229 ] 00:18:52.229 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:52.229 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:52.229 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:52.229 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.489 BaseBdev3 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:18:52.489 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.748 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:52.748 [ 00:18:52.748 { 00:18:52.748 "name": "BaseBdev3", 00:18:52.748 "aliases": [ 00:18:52.748 "df87ce85-9a4d-428d-975b-bf22bd6acda0" 00:18:52.748 ], 00:18:52.748 "product_name": "Malloc disk", 00:18:52.748 "block_size": 512, 00:18:52.748 "num_blocks": 65536, 00:18:52.748 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:18:52.748 "assigned_rate_limits": { 00:18:52.748 "rw_ios_per_sec": 0, 00:18:52.748 "rw_mbytes_per_sec": 0, 00:18:52.748 "r_mbytes_per_sec": 0, 00:18:52.748 "w_mbytes_per_sec": 0 00:18:52.748 }, 00:18:52.748 "claimed": false, 00:18:52.748 "zoned": false, 00:18:52.748 "supported_io_types": { 00:18:52.748 "read": true, 00:18:52.748 "write": true, 00:18:52.748 "unmap": true, 00:18:52.748 "flush": true, 00:18:52.748 "reset": true, 00:18:52.748 "nvme_admin": false, 00:18:52.748 "nvme_io": false, 00:18:52.748 "nvme_io_md": false, 00:18:52.748 "write_zeroes": true, 00:18:52.748 "zcopy": true, 00:18:52.748 "get_zone_info": false, 00:18:52.748 "zone_management": false, 00:18:52.748 "zone_append": false, 00:18:52.748 "compare": false, 00:18:52.748 "compare_and_write": false, 00:18:52.748 "abort": true, 00:18:52.748 "seek_hole": false, 00:18:52.748 "seek_data": false, 00:18:52.748 "copy": true, 00:18:52.748 "nvme_iov_md": false 00:18:52.748 }, 00:18:52.748 "memory_domains": [ 00:18:52.748 { 00:18:52.748 "dma_device_id": "system", 00:18:52.748 "dma_device_type": 1 00:18:52.748 }, 00:18:52.748 { 00:18:52.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.748 "dma_device_type": 2 00:18:52.748 } 00:18:52.748 ], 00:18:52.748 "driver_specific": {} 00:18:52.748 } 00:18:52.748 ] 00:18:52.748 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:52.748 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:52.748 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:52.748 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:53.008 [2024-07-26 10:29:05.700735] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.008 [2024-07-26 10:29:05.700772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.008 [2024-07-26 10:29:05.700790] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.008 [2024-07-26 10:29:05.701951] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.008 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.268 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.268 "name": "Existed_Raid", 00:18:53.268 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:18:53.268 "strip_size_kb": 0, 00:18:53.268 "state": "configuring", 00:18:53.268 "raid_level": "raid1", 00:18:53.268 "superblock": true, 00:18:53.268 "num_base_bdevs": 3, 00:18:53.268 "num_base_bdevs_discovered": 2, 00:18:53.268 "num_base_bdevs_operational": 3, 00:18:53.268 "base_bdevs_list": [ 00:18:53.268 { 00:18:53.268 "name": "BaseBdev1", 00:18:53.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.268 "is_configured": false, 00:18:53.268 "data_offset": 0, 00:18:53.268 "data_size": 0 00:18:53.268 }, 00:18:53.268 { 00:18:53.268 "name": "BaseBdev2", 00:18:53.268 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:18:53.268 "is_configured": true, 00:18:53.268 "data_offset": 2048, 00:18:53.268 "data_size": 63488 00:18:53.268 }, 00:18:53.268 { 00:18:53.268 "name": "BaseBdev3", 00:18:53.268 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:18:53.268 "is_configured": true, 00:18:53.268 "data_offset": 2048, 00:18:53.268 "data_size": 63488 00:18:53.268 } 00:18:53.268 ] 00:18:53.268 }' 00:18:53.268 10:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.268 10:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:53.836 [2024-07-26 10:29:06.691321] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:53.836 10:29:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.836 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.096 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.096 "name": "Existed_Raid", 00:18:54.096 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:18:54.096 "strip_size_kb": 0, 00:18:54.096 "state": "configuring", 00:18:54.096 "raid_level": "raid1", 00:18:54.096 "superblock": true, 00:18:54.096 "num_base_bdevs": 3, 00:18:54.096 "num_base_bdevs_discovered": 1, 00:18:54.096 "num_base_bdevs_operational": 3, 00:18:54.096 "base_bdevs_list": [ 00:18:54.096 { 00:18:54.096 "name": "BaseBdev1", 00:18:54.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.096 "is_configured": false, 00:18:54.096 "data_offset": 0, 00:18:54.096 "data_size": 0 00:18:54.096 }, 00:18:54.096 { 00:18:54.096 "name": null, 00:18:54.096 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:18:54.096 "is_configured": false, 00:18:54.096 "data_offset": 2048, 00:18:54.096 "data_size": 63488 00:18:54.096 }, 00:18:54.096 { 00:18:54.096 "name": "BaseBdev3", 00:18:54.096 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:18:54.096 "is_configured": true, 00:18:54.096 "data_offset": 2048, 00:18:54.096 "data_size": 63488 00:18:54.096 } 00:18:54.096 ] 00:18:54.096 }' 00:18:54.096 10:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.096 10:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.671 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.671 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:54.929 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:54.929 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:55.188 [2024-07-26 10:29:07.929906] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.188 BaseBdev1 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:55.188 10:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.756 10:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:55.756 [ 00:18:55.756 { 00:18:55.756 "name": "BaseBdev1", 00:18:55.756 "aliases": [ 00:18:55.756 "ca214223-736b-44a2-a125-27c71792e15a" 00:18:55.756 ], 00:18:55.756 "product_name": "Malloc disk", 00:18:55.756 "block_size": 512, 00:18:55.756 "num_blocks": 65536, 00:18:55.756 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:18:55.756 "assigned_rate_limits": { 00:18:55.756 "rw_ios_per_sec": 0, 00:18:55.756 "rw_mbytes_per_sec": 0, 00:18:55.756 "r_mbytes_per_sec": 0, 00:18:55.756 "w_mbytes_per_sec": 0 00:18:55.756 }, 00:18:55.756 "claimed": true, 00:18:55.756 "claim_type": "exclusive_write", 00:18:55.756 "zoned": false, 00:18:55.756 "supported_io_types": { 00:18:55.756 "read": true, 00:18:55.756 "write": true, 00:18:55.756 "unmap": true, 00:18:55.756 "flush": true, 00:18:55.756 "reset": true, 00:18:55.756 "nvme_admin": false, 00:18:55.756 "nvme_io": false, 00:18:55.756 "nvme_io_md": false, 00:18:55.756 "write_zeroes": true, 00:18:55.756 "zcopy": true, 00:18:55.756 "get_zone_info": false, 00:18:55.756 "zone_management": false, 00:18:55.756 "zone_append": false, 00:18:55.756 "compare": false, 00:18:55.756 "compare_and_write": false, 00:18:55.756 "abort": true, 00:18:55.756 "seek_hole": false, 00:18:55.756 "seek_data": false, 00:18:55.756 "copy": true, 00:18:55.756 "nvme_iov_md": false 00:18:55.756 }, 00:18:55.756 "memory_domains": [ 00:18:55.756 { 00:18:55.756 "dma_device_id": "system", 00:18:55.756 "dma_device_type": 1 00:18:55.756 }, 00:18:55.756 { 00:18:55.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.756 "dma_device_type": 2 00:18:55.756 } 00:18:55.756 ], 00:18:55.756 "driver_specific": {} 00:18:55.756 } 00:18:55.756 ] 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.015 10:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.584 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.584 "name": "Existed_Raid", 00:18:56.584 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:18:56.584 "strip_size_kb": 0, 00:18:56.584 "state": "configuring", 00:18:56.584 "raid_level": "raid1", 00:18:56.584 "superblock": true, 00:18:56.584 "num_base_bdevs": 3, 00:18:56.584 "num_base_bdevs_discovered": 2, 00:18:56.584 "num_base_bdevs_operational": 3, 00:18:56.584 "base_bdevs_list": [ 00:18:56.584 { 00:18:56.584 "name": "BaseBdev1", 00:18:56.584 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:18:56.584 "is_configured": true, 00:18:56.584 "data_offset": 2048, 00:18:56.584 "data_size": 63488 00:18:56.584 }, 00:18:56.584 { 00:18:56.584 "name": null, 00:18:56.584 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:18:56.584 "is_configured": false, 00:18:56.584 "data_offset": 2048, 00:18:56.584 "data_size": 63488 00:18:56.584 }, 00:18:56.584 { 00:18:56.584 "name": "BaseBdev3", 00:18:56.584 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:18:56.584 "is_configured": true, 00:18:56.584 "data_offset": 2048, 00:18:56.584 "data_size": 63488 00:18:56.584 } 00:18:56.584 ] 00:18:56.584 }' 00:18:56.584 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.584 10:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.152 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.152 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:57.152 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:57.152 10:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:57.411 [2024-07-26 10:29:10.187905] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:57.411 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:57.411 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:57.411 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:57.411 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.412 10:29:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.412 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.670 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.670 "name": "Existed_Raid", 00:18:57.670 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:18:57.670 "strip_size_kb": 0, 00:18:57.670 "state": "configuring", 00:18:57.670 "raid_level": "raid1", 00:18:57.670 "superblock": true, 00:18:57.670 "num_base_bdevs": 3, 00:18:57.670 "num_base_bdevs_discovered": 1, 00:18:57.670 "num_base_bdevs_operational": 3, 00:18:57.670 "base_bdevs_list": [ 00:18:57.670 { 00:18:57.671 "name": "BaseBdev1", 00:18:57.671 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:18:57.671 "is_configured": true, 00:18:57.671 "data_offset": 2048, 00:18:57.671 "data_size": 63488 00:18:57.671 }, 00:18:57.671 { 00:18:57.671 "name": null, 00:18:57.671 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:18:57.671 "is_configured": false, 00:18:57.671 "data_offset": 2048, 00:18:57.671 "data_size": 63488 00:18:57.671 }, 00:18:57.671 { 00:18:57.671 "name": null, 00:18:57.671 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:18:57.671 "is_configured": false, 00:18:57.671 "data_offset": 2048, 00:18:57.671 "data_size": 63488 00:18:57.671 } 00:18:57.671 ] 00:18:57.671 }' 00:18:57.671 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.671 10:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.239 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:58.239 10:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.498 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:58.498 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:58.758 [2024-07-26 10:29:11.423188] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.758 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.017 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.017 "name": "Existed_Raid", 00:18:59.017 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:18:59.017 "strip_size_kb": 0, 00:18:59.017 "state": "configuring", 00:18:59.017 "raid_level": "raid1", 00:18:59.017 "superblock": true, 00:18:59.017 "num_base_bdevs": 3, 00:18:59.017 "num_base_bdevs_discovered": 2, 00:18:59.017 "num_base_bdevs_operational": 3, 00:18:59.017 "base_bdevs_list": [ 00:18:59.017 { 00:18:59.017 "name": "BaseBdev1", 00:18:59.017 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:18:59.017 "is_configured": true, 00:18:59.017 "data_offset": 2048, 00:18:59.017 "data_size": 63488 00:18:59.017 }, 00:18:59.017 { 00:18:59.017 "name": null, 00:18:59.017 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:18:59.017 "is_configured": false, 00:18:59.017 "data_offset": 2048, 00:18:59.017 "data_size": 63488 00:18:59.017 }, 00:18:59.017 { 00:18:59.017 "name": "BaseBdev3", 00:18:59.017 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:18:59.017 "is_configured": true, 00:18:59.017 "data_offset": 2048, 00:18:59.017 "data_size": 63488 00:18:59.017 } 00:18:59.017 ] 00:18:59.017 }' 00:18:59.017 10:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.017 10:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.586 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.586 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:59.586 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:59.586 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:59.845 [2024-07-26 10:29:12.634407] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.845 
10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.845 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.104 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.104 "name": "Existed_Raid", 00:19:00.104 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:19:00.104 "strip_size_kb": 0, 00:19:00.104 "state": "configuring", 00:19:00.104 "raid_level": "raid1", 00:19:00.104 "superblock": true, 00:19:00.104 "num_base_bdevs": 3, 00:19:00.104 "num_base_bdevs_discovered": 1, 00:19:00.104 "num_base_bdevs_operational": 3, 00:19:00.104 "base_bdevs_list": [ 00:19:00.104 { 00:19:00.104 "name": null, 00:19:00.104 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:19:00.104 "is_configured": false, 00:19:00.104 "data_offset": 2048, 00:19:00.104 "data_size": 63488 00:19:00.104 }, 00:19:00.104 { 00:19:00.104 "name": null, 00:19:00.104 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:19:00.104 "is_configured": false, 00:19:00.104 "data_offset": 2048, 00:19:00.104 "data_size": 63488 00:19:00.104 }, 00:19:00.104 { 00:19:00.104 "name": "BaseBdev3", 00:19:00.104 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:19:00.104 "is_configured": true, 00:19:00.104 "data_offset": 2048, 00:19:00.104 "data_size": 63488 00:19:00.104 } 00:19:00.104 ] 00:19:00.104 }' 00:19:00.104 10:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.104 10:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.675 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.675 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:00.935 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:00.935 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:01.194 [2024-07-26 10:29:13.867655] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.194 10:29:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.194 10:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.452 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.452 "name": "Existed_Raid", 00:19:01.452 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:19:01.452 "strip_size_kb": 0, 00:19:01.452 "state": "configuring", 00:19:01.452 "raid_level": "raid1", 00:19:01.452 "superblock": true, 00:19:01.452 "num_base_bdevs": 3, 00:19:01.452 "num_base_bdevs_discovered": 2, 00:19:01.452 "num_base_bdevs_operational": 3, 00:19:01.452 "base_bdevs_list": [ 00:19:01.452 { 00:19:01.452 "name": null, 00:19:01.452 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:19:01.452 "is_configured": false, 00:19:01.452 "data_offset": 2048, 00:19:01.452 "data_size": 63488 00:19:01.452 }, 00:19:01.452 { 00:19:01.452 "name": "BaseBdev2", 00:19:01.452 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:19:01.452 "is_configured": true, 00:19:01.452 "data_offset": 2048, 00:19:01.452 "data_size": 63488 00:19:01.452 }, 00:19:01.452 { 00:19:01.452 "name": "BaseBdev3", 00:19:01.452 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:19:01.452 "is_configured": true, 00:19:01.452 "data_offset": 2048, 00:19:01.452 "data_size": 63488 00:19:01.452 } 00:19:01.452 ] 00:19:01.452 }' 00:19:01.453 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.453 10:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.020 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.020 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.020 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:02.020 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.020 10:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:02.278 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ca214223-736b-44a2-a125-27c71792e15a 00:19:02.537 [2024-07-26 10:29:15.346931] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:02.537 [2024-07-26 10:29:15.347068] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xceb7c0 00:19:02.537 [2024-07-26 10:29:15.347081] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:02.537 [2024-07-26 10:29:15.347250] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb37b40 00:19:02.538 [2024-07-26 10:29:15.347361] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xceb7c0 00:19:02.538 [2024-07-26 10:29:15.347370] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xceb7c0 00:19:02.538 [2024-07-26 10:29:15.347460] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.538 NewBaseBdev 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:02.538 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.797 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:03.056 [ 00:19:03.056 { 00:19:03.056 "name": "NewBaseBdev", 00:19:03.056 "aliases": [ 00:19:03.056 "ca214223-736b-44a2-a125-27c71792e15a" 00:19:03.056 ], 00:19:03.056 "product_name": "Malloc disk", 00:19:03.056 "block_size": 512, 00:19:03.056 "num_blocks": 65536, 00:19:03.056 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:19:03.056 "assigned_rate_limits": { 00:19:03.056 "rw_ios_per_sec": 0, 00:19:03.056 "rw_mbytes_per_sec": 0, 00:19:03.056 "r_mbytes_per_sec": 0, 00:19:03.056 "w_mbytes_per_sec": 0 00:19:03.056 }, 00:19:03.056 "claimed": true, 00:19:03.056 "claim_type": "exclusive_write", 00:19:03.056 "zoned": false, 00:19:03.056 "supported_io_types": { 00:19:03.056 "read": true, 00:19:03.056 "write": true, 00:19:03.056 "unmap": true, 00:19:03.056 "flush": true, 00:19:03.056 "reset": true, 00:19:03.056 "nvme_admin": false, 00:19:03.056 "nvme_io": false, 00:19:03.056 "nvme_io_md": false, 00:19:03.056 "write_zeroes": true, 00:19:03.056 "zcopy": true, 00:19:03.056 "get_zone_info": false, 00:19:03.056 "zone_management": false, 00:19:03.056 "zone_append": false, 00:19:03.056 "compare": false, 00:19:03.056 "compare_and_write": false, 00:19:03.056 "abort": true, 00:19:03.056 "seek_hole": false, 00:19:03.056 "seek_data": false, 00:19:03.056 "copy": true, 00:19:03.056 "nvme_iov_md": false 00:19:03.056 }, 00:19:03.056 
"memory_domains": [ 00:19:03.056 { 00:19:03.056 "dma_device_id": "system", 00:19:03.056 "dma_device_type": 1 00:19:03.056 }, 00:19:03.056 { 00:19:03.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.056 "dma_device_type": 2 00:19:03.056 } 00:19:03.056 ], 00:19:03.056 "driver_specific": {} 00:19:03.056 } 00:19:03.056 ] 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.056 10:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.316 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.316 "name": "Existed_Raid", 00:19:03.316 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:19:03.316 "strip_size_kb": 0, 00:19:03.316 "state": "online", 00:19:03.316 "raid_level": "raid1", 00:19:03.316 "superblock": true, 00:19:03.316 "num_base_bdevs": 3, 00:19:03.316 "num_base_bdevs_discovered": 3, 00:19:03.316 "num_base_bdevs_operational": 3, 00:19:03.316 "base_bdevs_list": [ 00:19:03.316 { 00:19:03.316 "name": "NewBaseBdev", 00:19:03.316 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:19:03.316 "is_configured": true, 00:19:03.316 "data_offset": 2048, 00:19:03.316 "data_size": 63488 00:19:03.316 }, 00:19:03.316 { 00:19:03.316 "name": "BaseBdev2", 00:19:03.316 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:19:03.316 "is_configured": true, 00:19:03.316 "data_offset": 2048, 00:19:03.316 "data_size": 63488 00:19:03.316 }, 00:19:03.316 { 00:19:03.316 "name": "BaseBdev3", 00:19:03.316 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:19:03.316 "is_configured": true, 00:19:03.316 "data_offset": 2048, 00:19:03.316 "data_size": 63488 00:19:03.316 } 00:19:03.316 ] 00:19:03.316 }' 00:19:03.316 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.316 10:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:03.884 10:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:03.884 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:04.143 [2024-07-26 10:29:16.823115] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.143 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:04.143 "name": "Existed_Raid", 00:19:04.143 "aliases": [ 00:19:04.143 "276df60c-d157-4676-9295-8c0341c2bb1a" 00:19:04.143 ], 00:19:04.143 "product_name": "Raid Volume", 00:19:04.143 "block_size": 512, 00:19:04.143 "num_blocks": 63488, 00:19:04.143 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:19:04.143 "assigned_rate_limits": { 00:19:04.143 "rw_ios_per_sec": 0, 00:19:04.143 "rw_mbytes_per_sec": 0, 00:19:04.143 "r_mbytes_per_sec": 0, 00:19:04.143 "w_mbytes_per_sec": 0 00:19:04.143 }, 00:19:04.143 "claimed": false, 00:19:04.143 "zoned": false, 00:19:04.143 "supported_io_types": { 00:19:04.143 "read": true, 00:19:04.143 "write": true, 00:19:04.143 "unmap": false, 00:19:04.143 "flush": false, 00:19:04.143 "reset": true, 00:19:04.143 "nvme_admin": false, 00:19:04.143 "nvme_io": false, 00:19:04.143 "nvme_io_md": false, 00:19:04.143 "write_zeroes": true, 00:19:04.143 "zcopy": false, 00:19:04.143 "get_zone_info": false, 00:19:04.143 "zone_management": false, 00:19:04.143 "zone_append": false, 00:19:04.143 "compare": false, 00:19:04.143 "compare_and_write": false, 00:19:04.143 "abort": false, 00:19:04.143 "seek_hole": false, 00:19:04.143 "seek_data": false, 00:19:04.143 "copy": false, 00:19:04.143 "nvme_iov_md": false 00:19:04.143 }, 00:19:04.143 "memory_domains": [ 00:19:04.143 { 00:19:04.143 "dma_device_id": "system", 00:19:04.143 "dma_device_type": 1 00:19:04.143 }, 00:19:04.143 { 00:19:04.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.143 "dma_device_type": 2 00:19:04.143 }, 00:19:04.143 { 00:19:04.143 "dma_device_id": "system", 00:19:04.143 "dma_device_type": 1 00:19:04.143 }, 00:19:04.143 { 00:19:04.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.143 "dma_device_type": 2 00:19:04.143 }, 00:19:04.143 { 00:19:04.143 "dma_device_id": "system", 00:19:04.143 "dma_device_type": 1 00:19:04.143 }, 00:19:04.143 { 00:19:04.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.143 "dma_device_type": 2 00:19:04.143 } 00:19:04.143 ], 00:19:04.143 "driver_specific": { 00:19:04.143 "raid": { 00:19:04.143 "uuid": "276df60c-d157-4676-9295-8c0341c2bb1a", 00:19:04.143 "strip_size_kb": 0, 00:19:04.143 "state": "online", 00:19:04.143 "raid_level": "raid1", 00:19:04.143 "superblock": true, 00:19:04.143 "num_base_bdevs": 3, 00:19:04.143 "num_base_bdevs_discovered": 3, 00:19:04.143 "num_base_bdevs_operational": 3, 00:19:04.143 "base_bdevs_list": [ 00:19:04.143 { 00:19:04.143 "name": "NewBaseBdev", 00:19:04.143 "uuid": 
"ca214223-736b-44a2-a125-27c71792e15a", 00:19:04.143 "is_configured": true, 00:19:04.143 "data_offset": 2048, 00:19:04.143 "data_size": 63488 00:19:04.143 }, 00:19:04.143 { 00:19:04.143 "name": "BaseBdev2", 00:19:04.143 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:19:04.143 "is_configured": true, 00:19:04.143 "data_offset": 2048, 00:19:04.144 "data_size": 63488 00:19:04.144 }, 00:19:04.144 { 00:19:04.144 "name": "BaseBdev3", 00:19:04.144 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:19:04.144 "is_configured": true, 00:19:04.144 "data_offset": 2048, 00:19:04.144 "data_size": 63488 00:19:04.144 } 00:19:04.144 ] 00:19:04.144 } 00:19:04.144 } 00:19:04.144 }' 00:19:04.144 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.144 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:04.144 BaseBdev2 00:19:04.144 BaseBdev3' 00:19:04.144 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.144 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:04.144 10:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:04.403 "name": "NewBaseBdev", 00:19:04.403 "aliases": [ 00:19:04.403 "ca214223-736b-44a2-a125-27c71792e15a" 00:19:04.403 ], 00:19:04.403 "product_name": "Malloc disk", 00:19:04.403 "block_size": 512, 00:19:04.403 "num_blocks": 65536, 00:19:04.403 "uuid": "ca214223-736b-44a2-a125-27c71792e15a", 00:19:04.403 "assigned_rate_limits": { 00:19:04.403 "rw_ios_per_sec": 0, 00:19:04.403 "rw_mbytes_per_sec": 0, 00:19:04.403 "r_mbytes_per_sec": 0, 00:19:04.403 "w_mbytes_per_sec": 0 00:19:04.403 }, 00:19:04.403 "claimed": true, 00:19:04.403 "claim_type": "exclusive_write", 00:19:04.403 "zoned": false, 00:19:04.403 "supported_io_types": { 00:19:04.403 "read": true, 00:19:04.403 "write": true, 00:19:04.403 "unmap": true, 00:19:04.403 "flush": true, 00:19:04.403 "reset": true, 00:19:04.403 "nvme_admin": false, 00:19:04.403 "nvme_io": false, 00:19:04.403 "nvme_io_md": false, 00:19:04.403 "write_zeroes": true, 00:19:04.403 "zcopy": true, 00:19:04.403 "get_zone_info": false, 00:19:04.403 "zone_management": false, 00:19:04.403 "zone_append": false, 00:19:04.403 "compare": false, 00:19:04.403 "compare_and_write": false, 00:19:04.403 "abort": true, 00:19:04.403 "seek_hole": false, 00:19:04.403 "seek_data": false, 00:19:04.403 "copy": true, 00:19:04.403 "nvme_iov_md": false 00:19:04.403 }, 00:19:04.403 "memory_domains": [ 00:19:04.403 { 00:19:04.403 "dma_device_id": "system", 00:19:04.403 "dma_device_type": 1 00:19:04.403 }, 00:19:04.403 { 00:19:04.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.403 "dma_device_type": 2 00:19:04.403 } 00:19:04.403 ], 00:19:04.403 "driver_specific": {} 00:19:04.403 }' 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:04.403 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:04.662 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:04.921 "name": "BaseBdev2", 00:19:04.921 "aliases": [ 00:19:04.921 "90704feb-348d-48ff-af38-f620e09e107c" 00:19:04.921 ], 00:19:04.921 "product_name": "Malloc disk", 00:19:04.921 "block_size": 512, 00:19:04.921 "num_blocks": 65536, 00:19:04.921 "uuid": "90704feb-348d-48ff-af38-f620e09e107c", 00:19:04.921 "assigned_rate_limits": { 00:19:04.921 "rw_ios_per_sec": 0, 00:19:04.921 "rw_mbytes_per_sec": 0, 00:19:04.921 "r_mbytes_per_sec": 0, 00:19:04.921 "w_mbytes_per_sec": 0 00:19:04.921 }, 00:19:04.921 "claimed": true, 00:19:04.921 "claim_type": "exclusive_write", 00:19:04.921 "zoned": false, 00:19:04.921 "supported_io_types": { 00:19:04.921 "read": true, 00:19:04.921 "write": true, 00:19:04.921 "unmap": true, 00:19:04.921 "flush": true, 00:19:04.921 "reset": true, 00:19:04.921 "nvme_admin": false, 00:19:04.921 "nvme_io": false, 00:19:04.921 "nvme_io_md": false, 00:19:04.921 "write_zeroes": true, 00:19:04.921 "zcopy": true, 00:19:04.921 "get_zone_info": false, 00:19:04.921 "zone_management": false, 00:19:04.921 "zone_append": false, 00:19:04.921 "compare": false, 00:19:04.921 "compare_and_write": false, 00:19:04.921 "abort": true, 00:19:04.921 "seek_hole": false, 00:19:04.921 "seek_data": false, 00:19:04.921 "copy": true, 00:19:04.921 "nvme_iov_md": false 00:19:04.921 }, 00:19:04.921 "memory_domains": [ 00:19:04.921 { 00:19:04.921 "dma_device_id": "system", 00:19:04.921 "dma_device_type": 1 00:19:04.921 }, 00:19:04.921 { 00:19:04.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.921 "dma_device_type": 2 00:19:04.921 } 00:19:04.921 ], 00:19:04.921 "driver_specific": {} 00:19:04.921 }' 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:04.921 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:05.181 10:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.440 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.440 "name": "BaseBdev3", 00:19:05.440 "aliases": [ 00:19:05.440 "df87ce85-9a4d-428d-975b-bf22bd6acda0" 00:19:05.440 ], 00:19:05.440 "product_name": "Malloc disk", 00:19:05.440 "block_size": 512, 00:19:05.440 "num_blocks": 65536, 00:19:05.440 "uuid": "df87ce85-9a4d-428d-975b-bf22bd6acda0", 00:19:05.440 "assigned_rate_limits": { 00:19:05.440 "rw_ios_per_sec": 0, 00:19:05.440 "rw_mbytes_per_sec": 0, 00:19:05.440 "r_mbytes_per_sec": 0, 00:19:05.440 "w_mbytes_per_sec": 0 00:19:05.440 }, 00:19:05.440 "claimed": true, 00:19:05.440 "claim_type": "exclusive_write", 00:19:05.440 "zoned": false, 00:19:05.440 "supported_io_types": { 00:19:05.440 "read": true, 00:19:05.440 "write": true, 00:19:05.440 "unmap": true, 00:19:05.440 "flush": true, 00:19:05.440 "reset": true, 00:19:05.440 "nvme_admin": false, 00:19:05.440 "nvme_io": false, 00:19:05.440 "nvme_io_md": false, 00:19:05.440 "write_zeroes": true, 00:19:05.440 "zcopy": true, 00:19:05.440 "get_zone_info": false, 00:19:05.440 "zone_management": false, 00:19:05.440 "zone_append": false, 00:19:05.440 "compare": false, 00:19:05.440 "compare_and_write": false, 00:19:05.440 "abort": true, 00:19:05.440 "seek_hole": false, 00:19:05.440 "seek_data": false, 00:19:05.440 "copy": true, 00:19:05.440 "nvme_iov_md": false 00:19:05.440 }, 00:19:05.440 "memory_domains": [ 00:19:05.440 { 00:19:05.440 "dma_device_id": "system", 00:19:05.440 "dma_device_type": 1 00:19:05.440 }, 00:19:05.440 { 00:19:05.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.440 "dma_device_type": 2 00:19:05.440 } 00:19:05.440 ], 00:19:05.440 "driver_specific": {} 00:19:05.440 }' 00:19:05.440 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.440 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.440 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.440 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.440 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.699 
10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.699 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:05.959 [2024-07-26 10:29:18.739906] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.959 [2024-07-26 10:29:18.739932] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.959 [2024-07-26 10:29:18.739980] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.959 [2024-07-26 10:29:18.740217] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.959 [2024-07-26 10:29:18.740228] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xceb7c0 name Existed_Raid, state offline 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3406377 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3406377 ']' 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3406377 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3406377 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3406377' 00:19:05.959 killing process with pid 3406377 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3406377 00:19:05.959 [2024-07-26 10:29:18.814195] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.959 10:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3406377 00:19:05.959 [2024-07-26 10:29:18.837327] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.218 10:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:06.218 00:19:06.218 real 0m27.277s 00:19:06.218 user 0m50.091s 00:19:06.218 sys 0m4.937s 00:19:06.218 10:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.218 10:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.218 ************************************ 00:19:06.218 END TEST 
raid_state_function_test_sb 00:19:06.218 ************************************ 00:19:06.218 10:29:19 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:19:06.218 10:29:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:06.218 10:29:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.218 10:29:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.218 ************************************ 00:19:06.218 START TEST raid_superblock_test 00:19:06.218 ************************************ 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:19:06.218 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3411480 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3411480 /var/tmp/spdk-raid.sock 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3411480 ']' 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:06.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.219 10:29:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.478 [2024-07-26 10:29:19.161292] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:19:06.478 [2024-07-26 10:29:19.161350] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411480 ] 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.478 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:06.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 
0000:3f:01.4 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:06.479 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.479 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:06.479 [2024-07-26 10:29:19.293710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.479 [2024-07-26 10:29:19.338358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.739 [2024-07-26 10:29:19.407767] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.740 [2024-07-26 10:29:19.407801] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:07.307 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:07.566 malloc1 00:19:07.566 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:07.825 [2024-07-26 10:29:20.506221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:07.825 [2024-07-26 10:29:20.506271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.825 [2024-07-26 10:29:20.506294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b72270 00:19:07.825 [2024-07-26 10:29:20.506306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.825 [2024-07-26 10:29:20.507741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.825 [2024-07-26 10:29:20.507769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:07.825 pt1 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:07.826 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:08.085 malloc2 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:08.085 [2024-07-26 10:29:20.963896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:08.085 [2024-07-26 10:29:20.963942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.085 [2024-07-26 10:29:20.963960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2e2f0 00:19:08.085 [2024-07-26 10:29:20.963972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.085 [2024-07-26 10:29:20.965491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.085 [2024-07-26 10:29:20.965519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:08.085 pt2 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:08.085 10:29:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:08.343 malloc3 00:19:08.343 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:08.602 [2024-07-26 10:29:21.429573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:08.602 [2024-07-26 10:29:21.429617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.602 [2024-07-26 10:29:21.429635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af8650 00:19:08.602 [2024-07-26 10:29:21.429647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.602 [2024-07-26 10:29:21.431135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.602 [2024-07-26 10:29:21.431171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:08.602 pt3 00:19:08.602 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:08.602 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:08.602 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:08.861 [2024-07-26 10:29:21.658192] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:08.861 [2024-07-26 10:29:21.659350] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.861 [2024-07-26 10:29:21.659399] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:08.861 [2024-07-26 10:29:21.659529] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1af9d00 00:19:08.861 [2024-07-26 10:29:21.659540] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.861 [2024-07-26 10:29:21.659719] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19c1fe0 00:19:08.861 [2024-07-26 10:29:21.659846] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1af9d00 00:19:08.861 [2024-07-26 10:29:21.659855] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1af9d00 00:19:08.861 [2024-07-26 10:29:21.659961] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.861 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.120 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.120 "name": "raid_bdev1", 00:19:09.120 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:09.120 "strip_size_kb": 0, 00:19:09.120 "state": "online", 00:19:09.120 "raid_level": "raid1", 00:19:09.120 "superblock": true, 00:19:09.120 "num_base_bdevs": 3, 00:19:09.120 "num_base_bdevs_discovered": 3, 00:19:09.121 "num_base_bdevs_operational": 3, 00:19:09.121 "base_bdevs_list": [ 00:19:09.121 { 00:19:09.121 "name": "pt1", 00:19:09.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.121 "is_configured": true, 00:19:09.121 "data_offset": 2048, 00:19:09.121 "data_size": 63488 00:19:09.121 }, 00:19:09.121 { 00:19:09.121 "name": "pt2", 00:19:09.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.121 "is_configured": true, 00:19:09.121 "data_offset": 2048, 00:19:09.121 "data_size": 63488 00:19:09.121 }, 00:19:09.121 { 00:19:09.121 "name": "pt3", 00:19:09.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.121 "is_configured": true, 00:19:09.121 "data_offset": 2048, 00:19:09.121 "data_size": 63488 00:19:09.121 } 00:19:09.121 ] 00:19:09.121 }' 00:19:09.121 10:29:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.121 10:29:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:09.688 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:09.948 [2024-07-26 10:29:22.624975] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.948 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:09.948 "name": "raid_bdev1", 00:19:09.948 "aliases": [ 00:19:09.948 
"bcc08c6b-0b24-41a7-9737-c871f42d7d46" 00:19:09.948 ], 00:19:09.948 "product_name": "Raid Volume", 00:19:09.948 "block_size": 512, 00:19:09.948 "num_blocks": 63488, 00:19:09.948 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:09.948 "assigned_rate_limits": { 00:19:09.948 "rw_ios_per_sec": 0, 00:19:09.948 "rw_mbytes_per_sec": 0, 00:19:09.948 "r_mbytes_per_sec": 0, 00:19:09.948 "w_mbytes_per_sec": 0 00:19:09.948 }, 00:19:09.948 "claimed": false, 00:19:09.948 "zoned": false, 00:19:09.948 "supported_io_types": { 00:19:09.948 "read": true, 00:19:09.948 "write": true, 00:19:09.948 "unmap": false, 00:19:09.948 "flush": false, 00:19:09.948 "reset": true, 00:19:09.948 "nvme_admin": false, 00:19:09.948 "nvme_io": false, 00:19:09.948 "nvme_io_md": false, 00:19:09.948 "write_zeroes": true, 00:19:09.948 "zcopy": false, 00:19:09.948 "get_zone_info": false, 00:19:09.948 "zone_management": false, 00:19:09.948 "zone_append": false, 00:19:09.948 "compare": false, 00:19:09.948 "compare_and_write": false, 00:19:09.948 "abort": false, 00:19:09.948 "seek_hole": false, 00:19:09.948 "seek_data": false, 00:19:09.948 "copy": false, 00:19:09.948 "nvme_iov_md": false 00:19:09.948 }, 00:19:09.948 "memory_domains": [ 00:19:09.948 { 00:19:09.948 "dma_device_id": "system", 00:19:09.948 "dma_device_type": 1 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.948 "dma_device_type": 2 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "dma_device_id": "system", 00:19:09.948 "dma_device_type": 1 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.948 "dma_device_type": 2 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "dma_device_id": "system", 00:19:09.948 "dma_device_type": 1 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.948 "dma_device_type": 2 00:19:09.948 } 00:19:09.948 ], 00:19:09.948 "driver_specific": { 00:19:09.948 "raid": { 00:19:09.948 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:09.948 "strip_size_kb": 0, 00:19:09.948 "state": "online", 00:19:09.948 "raid_level": "raid1", 00:19:09.948 "superblock": true, 00:19:09.948 "num_base_bdevs": 3, 00:19:09.948 "num_base_bdevs_discovered": 3, 00:19:09.948 "num_base_bdevs_operational": 3, 00:19:09.948 "base_bdevs_list": [ 00:19:09.948 { 00:19:09.948 "name": "pt1", 00:19:09.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.948 "is_configured": true, 00:19:09.948 "data_offset": 2048, 00:19:09.948 "data_size": 63488 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "name": "pt2", 00:19:09.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.948 "is_configured": true, 00:19:09.948 "data_offset": 2048, 00:19:09.948 "data_size": 63488 00:19:09.948 }, 00:19:09.948 { 00:19:09.948 "name": "pt3", 00:19:09.948 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:09.948 "is_configured": true, 00:19:09.948 "data_offset": 2048, 00:19:09.948 "data_size": 63488 00:19:09.948 } 00:19:09.948 ] 00:19:09.948 } 00:19:09.948 } 00:19:09.948 }' 00:19:09.948 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.948 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:09.948 pt2 00:19:09.948 pt3' 00:19:09.948 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:09.948 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:09.948 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.207 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.207 "name": "pt1", 00:19:10.207 "aliases": [ 00:19:10.207 "00000000-0000-0000-0000-000000000001" 00:19:10.207 ], 00:19:10.207 "product_name": "passthru", 00:19:10.207 "block_size": 512, 00:19:10.207 "num_blocks": 65536, 00:19:10.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.207 "assigned_rate_limits": { 00:19:10.207 "rw_ios_per_sec": 0, 00:19:10.207 "rw_mbytes_per_sec": 0, 00:19:10.207 "r_mbytes_per_sec": 0, 00:19:10.207 "w_mbytes_per_sec": 0 00:19:10.207 }, 00:19:10.207 "claimed": true, 00:19:10.207 "claim_type": "exclusive_write", 00:19:10.207 "zoned": false, 00:19:10.207 "supported_io_types": { 00:19:10.207 "read": true, 00:19:10.207 "write": true, 00:19:10.207 "unmap": true, 00:19:10.207 "flush": true, 00:19:10.207 "reset": true, 00:19:10.207 "nvme_admin": false, 00:19:10.207 "nvme_io": false, 00:19:10.207 "nvme_io_md": false, 00:19:10.207 "write_zeroes": true, 00:19:10.207 "zcopy": true, 00:19:10.207 "get_zone_info": false, 00:19:10.207 "zone_management": false, 00:19:10.207 "zone_append": false, 00:19:10.207 "compare": false, 00:19:10.207 "compare_and_write": false, 00:19:10.207 "abort": true, 00:19:10.207 "seek_hole": false, 00:19:10.207 "seek_data": false, 00:19:10.207 "copy": true, 00:19:10.207 "nvme_iov_md": false 00:19:10.207 }, 00:19:10.207 "memory_domains": [ 00:19:10.207 { 00:19:10.207 "dma_device_id": "system", 00:19:10.208 "dma_device_type": 1 00:19:10.208 }, 00:19:10.208 { 00:19:10.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.208 "dma_device_type": 2 00:19:10.208 } 00:19:10.208 ], 00:19:10.208 "driver_specific": { 00:19:10.208 "passthru": { 00:19:10.208 "name": "pt1", 00:19:10.208 "base_bdev_name": "malloc1" 00:19:10.208 } 00:19:10.208 } 00:19:10.208 }' 00:19:10.208 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.208 10:29:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.208 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.208 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.208 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.208 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.208 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.467 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:10.467 10:29:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:10.725 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:10.725 "name": "pt2", 00:19:10.725 "aliases": [ 00:19:10.725 "00000000-0000-0000-0000-000000000002" 00:19:10.725 ], 00:19:10.725 "product_name": "passthru", 00:19:10.725 "block_size": 512, 00:19:10.725 "num_blocks": 65536, 00:19:10.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.725 "assigned_rate_limits": { 00:19:10.725 "rw_ios_per_sec": 0, 00:19:10.725 "rw_mbytes_per_sec": 0, 00:19:10.725 "r_mbytes_per_sec": 0, 00:19:10.725 "w_mbytes_per_sec": 0 00:19:10.725 }, 00:19:10.725 "claimed": true, 00:19:10.725 "claim_type": "exclusive_write", 00:19:10.725 "zoned": false, 00:19:10.725 "supported_io_types": { 00:19:10.725 "read": true, 00:19:10.726 "write": true, 00:19:10.726 "unmap": true, 00:19:10.726 "flush": true, 00:19:10.726 "reset": true, 00:19:10.726 "nvme_admin": false, 00:19:10.726 "nvme_io": false, 00:19:10.726 "nvme_io_md": false, 00:19:10.726 "write_zeroes": true, 00:19:10.726 "zcopy": true, 00:19:10.726 "get_zone_info": false, 00:19:10.726 "zone_management": false, 00:19:10.726 "zone_append": false, 00:19:10.726 "compare": false, 00:19:10.726 "compare_and_write": false, 00:19:10.726 "abort": true, 00:19:10.726 "seek_hole": false, 00:19:10.726 "seek_data": false, 00:19:10.726 "copy": true, 00:19:10.726 "nvme_iov_md": false 00:19:10.726 }, 00:19:10.726 "memory_domains": [ 00:19:10.726 { 00:19:10.726 "dma_device_id": "system", 00:19:10.726 "dma_device_type": 1 00:19:10.726 }, 00:19:10.726 { 00:19:10.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.726 "dma_device_type": 2 00:19:10.726 } 00:19:10.726 ], 00:19:10.726 "driver_specific": { 00:19:10.726 "passthru": { 00:19:10.726 "name": "pt2", 00:19:10.726 "base_bdev_name": "malloc2" 00:19:10.726 } 00:19:10.726 } 00:19:10.726 }' 00:19:10.726 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.726 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:10.726 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:10.726 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.726 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:10.984 10:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:11.243 10:29:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:11.243 "name": "pt3", 00:19:11.243 "aliases": [ 00:19:11.243 "00000000-0000-0000-0000-000000000003" 00:19:11.243 ], 00:19:11.243 "product_name": "passthru", 00:19:11.243 "block_size": 512, 00:19:11.243 "num_blocks": 65536, 00:19:11.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:11.243 "assigned_rate_limits": { 00:19:11.243 "rw_ios_per_sec": 0, 00:19:11.243 "rw_mbytes_per_sec": 0, 00:19:11.243 "r_mbytes_per_sec": 0, 00:19:11.243 "w_mbytes_per_sec": 0 00:19:11.243 }, 00:19:11.243 "claimed": true, 00:19:11.243 "claim_type": "exclusive_write", 00:19:11.243 "zoned": false, 00:19:11.243 "supported_io_types": { 00:19:11.243 "read": true, 00:19:11.243 "write": true, 00:19:11.243 "unmap": true, 00:19:11.243 "flush": true, 00:19:11.243 "reset": true, 00:19:11.243 "nvme_admin": false, 00:19:11.243 "nvme_io": false, 00:19:11.243 "nvme_io_md": false, 00:19:11.243 "write_zeroes": true, 00:19:11.243 "zcopy": true, 00:19:11.243 "get_zone_info": false, 00:19:11.243 "zone_management": false, 00:19:11.243 "zone_append": false, 00:19:11.243 "compare": false, 00:19:11.243 "compare_and_write": false, 00:19:11.243 "abort": true, 00:19:11.243 "seek_hole": false, 00:19:11.243 "seek_data": false, 00:19:11.243 "copy": true, 00:19:11.243 "nvme_iov_md": false 00:19:11.243 }, 00:19:11.243 "memory_domains": [ 00:19:11.243 { 00:19:11.243 "dma_device_id": "system", 00:19:11.243 "dma_device_type": 1 00:19:11.243 }, 00:19:11.243 { 00:19:11.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.243 "dma_device_type": 2 00:19:11.243 } 00:19:11.243 ], 00:19:11.243 "driver_specific": { 00:19:11.243 "passthru": { 00:19:11.243 "name": "pt3", 00:19:11.243 "base_bdev_name": "malloc3" 00:19:11.243 } 00:19:11.243 } 00:19:11.243 }' 00:19:11.243 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.243 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:11.243 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:11.243 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.501 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:11.501 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:11.501 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.501 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:11.501 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:11.502 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.502 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:11.502 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:11.502 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:11.502 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:19:11.760 [2024-07-26 10:29:24.590159] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.760 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=bcc08c6b-0b24-41a7-9737-c871f42d7d46 00:19:11.760 10:29:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z bcc08c6b-0b24-41a7-9737-c871f42d7d46 ']' 00:19:11.760 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:12.019 [2024-07-26 10:29:24.814496] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.019 [2024-07-26 10:29:24.814516] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.019 [2024-07-26 10:29:24.814562] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.019 [2024-07-26 10:29:24.814624] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.019 [2024-07-26 10:29:24.814635] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1af9d00 name raid_bdev1, state offline 00:19:12.019 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.019 10:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:19:12.278 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:19:12.278 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:19:12.278 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.278 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:12.537 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.537 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:12.795 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.795 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:13.054 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:13.054 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:13.313 10:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:13.313 [2024-07-26 10:29:26.178034] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:13.313 [2024-07-26 10:29:26.179306] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:13.313 [2024-07-26 10:29:26.179348] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:13.313 [2024-07-26 10:29:26.179390] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:13.313 [2024-07-26 10:29:26.179426] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:13.313 [2024-07-26 10:29:26.179447] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:13.313 [2024-07-26 10:29:26.179464] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.313 [2024-07-26 10:29:26.179472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1afa8d0 name raid_bdev1, state configuring 00:19:13.313 request: 00:19:13.313 { 00:19:13.313 "name": "raid_bdev1", 00:19:13.313 "raid_level": "raid1", 00:19:13.313 "base_bdevs": [ 00:19:13.313 "malloc1", 00:19:13.313 "malloc2", 00:19:13.313 "malloc3" 00:19:13.313 ], 00:19:13.313 "superblock": false, 00:19:13.313 "method": "bdev_raid_create", 00:19:13.313 "req_id": 1 00:19:13.313 } 00:19:13.313 Got JSON-RPC error response 00:19:13.313 response: 00:19:13.313 { 00:19:13.313 "code": -17, 00:19:13.313 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:13.313 } 00:19:13.313 10:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:13.313 10:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.313 10:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.313 10:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.313 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.313 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:19:13.571 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:19:13.571 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:19:13.571 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:13.829 [2024-07-26 10:29:26.627160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:13.829 [2024-07-26 10:29:26.627204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.829 [2024-07-26 10:29:26.627222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b725b0 00:19:13.829 [2024-07-26 10:29:26.627233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.829 [2024-07-26 10:29:26.628707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.829 [2024-07-26 10:29:26.628734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:13.829 [2024-07-26 10:29:26.628793] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:13.829 [2024-07-26 10:29:26.628815] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:13.829 pt1 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.829 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.087 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.087 "name": "raid_bdev1", 00:19:14.087 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:14.087 "strip_size_kb": 0, 00:19:14.087 "state": "configuring", 00:19:14.087 "raid_level": "raid1", 00:19:14.087 "superblock": true, 00:19:14.087 "num_base_bdevs": 3, 00:19:14.087 "num_base_bdevs_discovered": 1, 00:19:14.087 "num_base_bdevs_operational": 3, 00:19:14.087 
"base_bdevs_list": [ 00:19:14.087 { 00:19:14.087 "name": "pt1", 00:19:14.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:14.087 "is_configured": true, 00:19:14.087 "data_offset": 2048, 00:19:14.087 "data_size": 63488 00:19:14.087 }, 00:19:14.087 { 00:19:14.087 "name": null, 00:19:14.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.087 "is_configured": false, 00:19:14.087 "data_offset": 2048, 00:19:14.087 "data_size": 63488 00:19:14.087 }, 00:19:14.087 { 00:19:14.087 "name": null, 00:19:14.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:14.087 "is_configured": false, 00:19:14.087 "data_offset": 2048, 00:19:14.087 "data_size": 63488 00:19:14.087 } 00:19:14.087 ] 00:19:14.087 }' 00:19:14.087 10:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.087 10:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.652 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:19:14.652 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:14.911 [2024-07-26 10:29:27.673908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:14.911 [2024-07-26 10:29:27.673955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.911 [2024-07-26 10:29:27.673975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afba20 00:19:14.911 [2024-07-26 10:29:27.673986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.911 [2024-07-26 10:29:27.674303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.911 [2024-07-26 10:29:27.674320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:14.911 [2024-07-26 10:29:27.674374] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:14.911 [2024-07-26 10:29:27.674391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:14.911 pt2 00:19:14.911 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:15.170 [2024-07-26 10:29:27.902693] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.170 10:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.429 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.429 "name": "raid_bdev1", 00:19:15.429 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:15.429 "strip_size_kb": 0, 00:19:15.429 "state": "configuring", 00:19:15.429 "raid_level": "raid1", 00:19:15.429 "superblock": true, 00:19:15.429 "num_base_bdevs": 3, 00:19:15.429 "num_base_bdevs_discovered": 1, 00:19:15.429 "num_base_bdevs_operational": 3, 00:19:15.429 "base_bdevs_list": [ 00:19:15.429 { 00:19:15.429 "name": "pt1", 00:19:15.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:15.429 "is_configured": true, 00:19:15.429 "data_offset": 2048, 00:19:15.429 "data_size": 63488 00:19:15.429 }, 00:19:15.429 { 00:19:15.429 "name": null, 00:19:15.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:15.429 "is_configured": false, 00:19:15.429 "data_offset": 2048, 00:19:15.429 "data_size": 63488 00:19:15.429 }, 00:19:15.429 { 00:19:15.429 "name": null, 00:19:15.429 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:15.429 "is_configured": false, 00:19:15.429 "data_offset": 2048, 00:19:15.429 "data_size": 63488 00:19:15.429 } 00:19:15.429 ] 00:19:15.429 }' 00:19:15.429 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.429 10:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.997 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:19:15.997 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:15.997 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:16.255 [2024-07-26 10:29:28.929395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:16.255 [2024-07-26 10:29:28.929441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.255 [2024-07-26 10:29:28.929458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c1b80 00:19:16.255 [2024-07-26 10:29:28.929470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.255 [2024-07-26 10:29:28.929773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.255 [2024-07-26 10:29:28.929789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:16.255 [2024-07-26 10:29:28.929844] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:16.255 [2024-07-26 10:29:28.929861] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.255 pt2 00:19:16.255 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:16.255 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:16.255 10:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:16.514 [2024-07-26 10:29:29.162007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:16.514 [2024-07-26 10:29:29.162036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.514 [2024-07-26 10:29:29.162052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c1420 00:19:16.514 [2024-07-26 10:29:29.162063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.514 [2024-07-26 10:29:29.162322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.514 [2024-07-26 10:29:29.162338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:16.514 [2024-07-26 10:29:29.162382] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:16.514 [2024-07-26 10:29:29.162398] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:16.514 [2024-07-26 10:29:29.162500] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1afdc30 00:19:16.514 [2024-07-26 10:29:29.162510] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:16.514 [2024-07-26 10:29:29.162655] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19bf680 00:19:16.514 [2024-07-26 10:29:29.162770] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1afdc30 00:19:16.514 [2024-07-26 10:29:29.162779] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1afdc30 00:19:16.514 [2024-07-26 10:29:29.162862] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.514 pt3 00:19:16.514 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:16.514 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.515 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.774 10:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.774 "name": "raid_bdev1", 00:19:16.774 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:16.774 "strip_size_kb": 0, 00:19:16.774 "state": "online", 00:19:16.774 "raid_level": "raid1", 00:19:16.774 "superblock": true, 00:19:16.774 "num_base_bdevs": 3, 00:19:16.774 "num_base_bdevs_discovered": 3, 00:19:16.774 "num_base_bdevs_operational": 3, 00:19:16.774 "base_bdevs_list": [ 00:19:16.774 { 00:19:16.774 "name": "pt1", 00:19:16.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:16.774 "is_configured": true, 00:19:16.774 "data_offset": 2048, 00:19:16.774 "data_size": 63488 00:19:16.774 }, 00:19:16.774 { 00:19:16.774 "name": "pt2", 00:19:16.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:16.774 "is_configured": true, 00:19:16.774 "data_offset": 2048, 00:19:16.774 "data_size": 63488 00:19:16.774 }, 00:19:16.774 { 00:19:16.774 "name": "pt3", 00:19:16.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:16.774 "is_configured": true, 00:19:16.774 "data_offset": 2048, 00:19:16.774 "data_size": 63488 00:19:16.774 } 00:19:16.774 ] 00:19:16.774 }' 00:19:16.774 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.774 10:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:17.343 10:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:17.343 [2024-07-26 10:29:30.188975] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.343 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:17.343 "name": "raid_bdev1", 00:19:17.343 "aliases": [ 00:19:17.343 "bcc08c6b-0b24-41a7-9737-c871f42d7d46" 00:19:17.343 ], 00:19:17.343 "product_name": "Raid Volume", 00:19:17.343 "block_size": 512, 00:19:17.343 "num_blocks": 63488, 00:19:17.343 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:17.343 "assigned_rate_limits": { 00:19:17.343 "rw_ios_per_sec": 0, 00:19:17.343 "rw_mbytes_per_sec": 0, 00:19:17.343 "r_mbytes_per_sec": 0, 00:19:17.343 "w_mbytes_per_sec": 0 00:19:17.343 }, 00:19:17.343 "claimed": false, 00:19:17.343 "zoned": false, 00:19:17.343 "supported_io_types": { 00:19:17.343 "read": true, 00:19:17.343 "write": true, 00:19:17.343 "unmap": false, 00:19:17.343 "flush": false, 00:19:17.343 "reset": true, 00:19:17.343 "nvme_admin": false, 00:19:17.343 "nvme_io": false, 00:19:17.343 "nvme_io_md": false, 00:19:17.343 "write_zeroes": true, 00:19:17.343 "zcopy": false, 00:19:17.343 "get_zone_info": false, 00:19:17.343 "zone_management": false, 00:19:17.343 "zone_append": false, 00:19:17.343 "compare": 
false, 00:19:17.343 "compare_and_write": false, 00:19:17.343 "abort": false, 00:19:17.343 "seek_hole": false, 00:19:17.343 "seek_data": false, 00:19:17.343 "copy": false, 00:19:17.343 "nvme_iov_md": false 00:19:17.343 }, 00:19:17.343 "memory_domains": [ 00:19:17.343 { 00:19:17.343 "dma_device_id": "system", 00:19:17.343 "dma_device_type": 1 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.343 "dma_device_type": 2 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "dma_device_id": "system", 00:19:17.343 "dma_device_type": 1 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.343 "dma_device_type": 2 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "dma_device_id": "system", 00:19:17.343 "dma_device_type": 1 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.343 "dma_device_type": 2 00:19:17.343 } 00:19:17.343 ], 00:19:17.343 "driver_specific": { 00:19:17.343 "raid": { 00:19:17.343 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:17.343 "strip_size_kb": 0, 00:19:17.343 "state": "online", 00:19:17.343 "raid_level": "raid1", 00:19:17.343 "superblock": true, 00:19:17.343 "num_base_bdevs": 3, 00:19:17.343 "num_base_bdevs_discovered": 3, 00:19:17.343 "num_base_bdevs_operational": 3, 00:19:17.343 "base_bdevs_list": [ 00:19:17.343 { 00:19:17.343 "name": "pt1", 00:19:17.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.343 "is_configured": true, 00:19:17.343 "data_offset": 2048, 00:19:17.343 "data_size": 63488 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "name": "pt2", 00:19:17.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.343 "is_configured": true, 00:19:17.343 "data_offset": 2048, 00:19:17.343 "data_size": 63488 00:19:17.343 }, 00:19:17.343 { 00:19:17.343 "name": "pt3", 00:19:17.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:17.343 "is_configured": true, 00:19:17.343 "data_offset": 2048, 00:19:17.343 "data_size": 63488 00:19:17.343 } 00:19:17.343 ] 00:19:17.343 } 00:19:17.343 } 00:19:17.343 }' 00:19:17.343 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.603 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:17.603 pt2 00:19:17.603 pt3' 00:19:17.603 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:17.603 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:17.603 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:17.603 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:17.603 "name": "pt1", 00:19:17.603 "aliases": [ 00:19:17.603 "00000000-0000-0000-0000-000000000001" 00:19:17.603 ], 00:19:17.603 "product_name": "passthru", 00:19:17.603 "block_size": 512, 00:19:17.603 "num_blocks": 65536, 00:19:17.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.603 "assigned_rate_limits": { 00:19:17.603 "rw_ios_per_sec": 0, 00:19:17.603 "rw_mbytes_per_sec": 0, 00:19:17.603 "r_mbytes_per_sec": 0, 00:19:17.603 "w_mbytes_per_sec": 0 00:19:17.603 }, 00:19:17.603 "claimed": true, 00:19:17.603 "claim_type": "exclusive_write", 00:19:17.603 "zoned": false, 00:19:17.603 "supported_io_types": { 00:19:17.603 "read": true, 
00:19:17.603 "write": true, 00:19:17.603 "unmap": true, 00:19:17.603 "flush": true, 00:19:17.603 "reset": true, 00:19:17.603 "nvme_admin": false, 00:19:17.603 "nvme_io": false, 00:19:17.603 "nvme_io_md": false, 00:19:17.603 "write_zeroes": true, 00:19:17.603 "zcopy": true, 00:19:17.603 "get_zone_info": false, 00:19:17.603 "zone_management": false, 00:19:17.603 "zone_append": false, 00:19:17.603 "compare": false, 00:19:17.603 "compare_and_write": false, 00:19:17.603 "abort": true, 00:19:17.603 "seek_hole": false, 00:19:17.603 "seek_data": false, 00:19:17.603 "copy": true, 00:19:17.603 "nvme_iov_md": false 00:19:17.603 }, 00:19:17.603 "memory_domains": [ 00:19:17.603 { 00:19:17.603 "dma_device_id": "system", 00:19:17.603 "dma_device_type": 1 00:19:17.603 }, 00:19:17.603 { 00:19:17.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.603 "dma_device_type": 2 00:19:17.603 } 00:19:17.603 ], 00:19:17.603 "driver_specific": { 00:19:17.603 "passthru": { 00:19:17.603 "name": "pt1", 00:19:17.603 "base_bdev_name": "malloc1" 00:19:17.603 } 00:19:17.603 } 00:19:17.603 }' 00:19:17.603 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.862 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:18.122 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:18.122 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:18.122 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:18.122 10:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:18.122 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:18.123 "name": "pt2", 00:19:18.123 "aliases": [ 00:19:18.123 "00000000-0000-0000-0000-000000000002" 00:19:18.123 ], 00:19:18.123 "product_name": "passthru", 00:19:18.123 "block_size": 512, 00:19:18.123 "num_blocks": 65536, 00:19:18.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.123 "assigned_rate_limits": { 00:19:18.123 "rw_ios_per_sec": 0, 00:19:18.123 "rw_mbytes_per_sec": 0, 00:19:18.123 "r_mbytes_per_sec": 0, 00:19:18.123 "w_mbytes_per_sec": 0 00:19:18.123 }, 00:19:18.123 "claimed": true, 00:19:18.123 "claim_type": "exclusive_write", 00:19:18.123 "zoned": false, 00:19:18.123 "supported_io_types": { 00:19:18.123 "read": true, 00:19:18.123 "write": true, 00:19:18.123 "unmap": true, 00:19:18.123 "flush": true, 00:19:18.123 "reset": true, 00:19:18.123 
"nvme_admin": false, 00:19:18.123 "nvme_io": false, 00:19:18.123 "nvme_io_md": false, 00:19:18.123 "write_zeroes": true, 00:19:18.123 "zcopy": true, 00:19:18.123 "get_zone_info": false, 00:19:18.123 "zone_management": false, 00:19:18.123 "zone_append": false, 00:19:18.123 "compare": false, 00:19:18.123 "compare_and_write": false, 00:19:18.123 "abort": true, 00:19:18.123 "seek_hole": false, 00:19:18.123 "seek_data": false, 00:19:18.123 "copy": true, 00:19:18.123 "nvme_iov_md": false 00:19:18.123 }, 00:19:18.123 "memory_domains": [ 00:19:18.123 { 00:19:18.123 "dma_device_id": "system", 00:19:18.123 "dma_device_type": 1 00:19:18.123 }, 00:19:18.123 { 00:19:18.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.123 "dma_device_type": 2 00:19:18.123 } 00:19:18.123 ], 00:19:18.123 "driver_specific": { 00:19:18.123 "passthru": { 00:19:18.123 "name": "pt2", 00:19:18.123 "base_bdev_name": "malloc2" 00:19:18.123 } 00:19:18.123 } 00:19:18.123 }' 00:19:18.123 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:18.400 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:18.658 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:18.658 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:18.658 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:18.658 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:18.658 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:18.917 "name": "pt3", 00:19:18.917 "aliases": [ 00:19:18.917 "00000000-0000-0000-0000-000000000003" 00:19:18.917 ], 00:19:18.917 "product_name": "passthru", 00:19:18.917 "block_size": 512, 00:19:18.917 "num_blocks": 65536, 00:19:18.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:18.917 "assigned_rate_limits": { 00:19:18.917 "rw_ios_per_sec": 0, 00:19:18.917 "rw_mbytes_per_sec": 0, 00:19:18.917 "r_mbytes_per_sec": 0, 00:19:18.917 "w_mbytes_per_sec": 0 00:19:18.917 }, 00:19:18.917 "claimed": true, 00:19:18.917 "claim_type": "exclusive_write", 00:19:18.917 "zoned": false, 00:19:18.917 "supported_io_types": { 00:19:18.917 "read": true, 00:19:18.917 "write": true, 00:19:18.917 "unmap": true, 00:19:18.917 "flush": true, 00:19:18.917 "reset": true, 00:19:18.917 "nvme_admin": false, 00:19:18.917 "nvme_io": false, 00:19:18.917 "nvme_io_md": false, 00:19:18.917 "write_zeroes": true, 00:19:18.917 
"zcopy": true, 00:19:18.917 "get_zone_info": false, 00:19:18.917 "zone_management": false, 00:19:18.917 "zone_append": false, 00:19:18.917 "compare": false, 00:19:18.917 "compare_and_write": false, 00:19:18.917 "abort": true, 00:19:18.917 "seek_hole": false, 00:19:18.917 "seek_data": false, 00:19:18.917 "copy": true, 00:19:18.917 "nvme_iov_md": false 00:19:18.917 }, 00:19:18.917 "memory_domains": [ 00:19:18.917 { 00:19:18.917 "dma_device_id": "system", 00:19:18.917 "dma_device_type": 1 00:19:18.917 }, 00:19:18.917 { 00:19:18.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.917 "dma_device_type": 2 00:19:18.917 } 00:19:18.917 ], 00:19:18.917 "driver_specific": { 00:19:18.917 "passthru": { 00:19:18.917 "name": "pt3", 00:19:18.917 "base_bdev_name": "malloc3" 00:19:18.917 } 00:19:18.917 } 00:19:18.917 }' 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:18.917 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:19.176 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:19.176 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:19.176 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:19.176 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:19.176 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:19.176 10:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:19.435 [2024-07-26 10:29:32.122074] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.435 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' bcc08c6b-0b24-41a7-9737-c871f42d7d46 '!=' bcc08c6b-0b24-41a7-9737-c871f42d7d46 ']' 00:19:19.435 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:19:19.435 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:19.435 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:19.435 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:19.693 [2024-07-26 10:29:32.350446] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.693 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.694 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.952 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.952 "name": "raid_bdev1", 00:19:19.952 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:19.952 "strip_size_kb": 0, 00:19:19.952 "state": "online", 00:19:19.952 "raid_level": "raid1", 00:19:19.952 "superblock": true, 00:19:19.952 "num_base_bdevs": 3, 00:19:19.952 "num_base_bdevs_discovered": 2, 00:19:19.952 "num_base_bdevs_operational": 2, 00:19:19.952 "base_bdevs_list": [ 00:19:19.952 { 00:19:19.952 "name": null, 00:19:19.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.952 "is_configured": false, 00:19:19.952 "data_offset": 2048, 00:19:19.952 "data_size": 63488 00:19:19.952 }, 00:19:19.952 { 00:19:19.952 "name": "pt2", 00:19:19.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.952 "is_configured": true, 00:19:19.952 "data_offset": 2048, 00:19:19.952 "data_size": 63488 00:19:19.952 }, 00:19:19.952 { 00:19:19.952 "name": "pt3", 00:19:19.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:19.952 "is_configured": true, 00:19:19.952 "data_offset": 2048, 00:19:19.952 "data_size": 63488 00:19:19.952 } 00:19:19.952 ] 00:19:19.952 }' 00:19:19.952 10:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.953 10:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.519 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:20.519 [2024-07-26 10:29:33.357082] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.519 [2024-07-26 10:29:33.357105] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.519 [2024-07-26 10:29:33.357155] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.519 [2024-07-26 10:29:33.357202] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.519 [2024-07-26 10:29:33.357213] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1afdc30 name raid_bdev1, state offline 00:19:20.519 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.519 10:29:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:19:20.777 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:19:20.777 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:19:20.777 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:20.777 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:19:20.777 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:21.035 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:21.035 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:19:21.035 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:21.294 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:21.294 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:19:21.294 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:19:21.294 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:19:21.294 10:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.294 [2024-07-26 10:29:34.115045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.294 [2024-07-26 10:29:34.115089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.294 [2024-07-26 10:29:34.115106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afcc80 00:19:21.294 [2024-07-26 10:29:34.115117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.294 [2024-07-26 10:29:34.116574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.294 [2024-07-26 10:29:34.116601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.294 [2024-07-26 10:29:34.116657] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:21.294 [2024-07-26 10:29:34.116680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.294 pt2 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.294 10:29:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.294 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.553 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.553 "name": "raid_bdev1", 00:19:21.553 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:21.553 "strip_size_kb": 0, 00:19:21.553 "state": "configuring", 00:19:21.553 "raid_level": "raid1", 00:19:21.553 "superblock": true, 00:19:21.553 "num_base_bdevs": 3, 00:19:21.553 "num_base_bdevs_discovered": 1, 00:19:21.553 "num_base_bdevs_operational": 2, 00:19:21.553 "base_bdevs_list": [ 00:19:21.553 { 00:19:21.553 "name": null, 00:19:21.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.553 "is_configured": false, 00:19:21.553 "data_offset": 2048, 00:19:21.553 "data_size": 63488 00:19:21.553 }, 00:19:21.553 { 00:19:21.553 "name": "pt2", 00:19:21.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.553 "is_configured": true, 00:19:21.553 "data_offset": 2048, 00:19:21.553 "data_size": 63488 00:19:21.553 }, 00:19:21.553 { 00:19:21.553 "name": null, 00:19:21.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.553 "is_configured": false, 00:19:21.553 "data_offset": 2048, 00:19:21.553 "data_size": 63488 00:19:21.553 } 00:19:21.553 ] 00:19:21.553 }' 00:19:21.553 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.553 10:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.121 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:19:22.121 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:19:22.121 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:22.121 10:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:22.380 [2024-07-26 10:29:35.153801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:22.380 [2024-07-26 10:29:35.153847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.380 [2024-07-26 10:29:35.153864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afd4e0 00:19:22.380 [2024-07-26 10:29:35.153875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.380 [2024-07-26 10:29:35.154186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.380 [2024-07-26 10:29:35.154202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:22.380 [2024-07-26 10:29:35.154256] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:22.380 [2024-07-26 10:29:35.154274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:22.380 [2024-07-26 10:29:35.154361] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b0dc30 00:19:22.380 [2024-07-26 10:29:35.154370] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:22.380 [2024-07-26 10:29:35.154520] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19c1fe0 00:19:22.380 [2024-07-26 10:29:35.154635] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b0dc30 00:19:22.380 [2024-07-26 10:29:35.154644] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b0dc30 00:19:22.380 [2024-07-26 10:29:35.154733] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.380 pt3 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.380 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.381 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.381 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.381 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.381 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.640 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.640 "name": "raid_bdev1", 00:19:22.640 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:22.640 "strip_size_kb": 0, 00:19:22.640 "state": "online", 00:19:22.640 "raid_level": "raid1", 00:19:22.640 "superblock": true, 00:19:22.640 "num_base_bdevs": 3, 00:19:22.640 "num_base_bdevs_discovered": 2, 00:19:22.640 "num_base_bdevs_operational": 2, 00:19:22.640 "base_bdevs_list": [ 00:19:22.640 { 00:19:22.640 "name": null, 00:19:22.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.640 "is_configured": false, 00:19:22.640 "data_offset": 2048, 00:19:22.640 "data_size": 63488 00:19:22.640 }, 00:19:22.640 { 00:19:22.640 "name": "pt2", 00:19:22.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.640 "is_configured": true, 00:19:22.640 "data_offset": 2048, 00:19:22.640 "data_size": 63488 00:19:22.640 }, 00:19:22.640 { 00:19:22.640 "name": "pt3", 00:19:22.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.640 "is_configured": true, 00:19:22.640 "data_offset": 2048, 00:19:22.640 "data_size": 63488 00:19:22.640 } 00:19:22.640 ] 00:19:22.640 }' 00:19:22.640 10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.640 10:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.208 
10:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:23.467 [2024-07-26 10:29:36.188508] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.467 [2024-07-26 10:29:36.188531] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.467 [2024-07-26 10:29:36.188579] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.467 [2024-07-26 10:29:36.188632] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.467 [2024-07-26 10:29:36.188642] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b0dc30 name raid_bdev1, state offline 00:19:23.467 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:19:23.467 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.727 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:19:23.727 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:19:23.727 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:19:23.727 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:19:23.727 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:23.727 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:23.986 [2024-07-26 10:29:36.753963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:23.986 [2024-07-26 10:29:36.754001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.986 [2024-07-26 10:29:36.754017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2da00 00:19:23.986 [2024-07-26 10:29:36.754028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.986 [2024-07-26 10:29:36.755483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.986 [2024-07-26 10:29:36.755509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:23.986 [2024-07-26 10:29:36.755562] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:23.986 [2024-07-26 10:29:36.755584] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:23.986 [2024-07-26 10:29:36.755667] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:23.986 [2024-07-26 10:29:36.755679] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.986 [2024-07-26 10:29:36.755691] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b0dab0 name raid_bdev1, state configuring 00:19:23.986 [2024-07-26 10:29:36.755711] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.986 pt1 00:19:23.986 10:29:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.987 10:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.246 10:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.246 "name": "raid_bdev1", 00:19:24.246 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:24.246 "strip_size_kb": 0, 00:19:24.246 "state": "configuring", 00:19:24.246 "raid_level": "raid1", 00:19:24.246 "superblock": true, 00:19:24.246 "num_base_bdevs": 3, 00:19:24.246 "num_base_bdevs_discovered": 1, 00:19:24.246 "num_base_bdevs_operational": 2, 00:19:24.246 "base_bdevs_list": [ 00:19:24.246 { 00:19:24.246 "name": null, 00:19:24.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.246 "is_configured": false, 00:19:24.246 "data_offset": 2048, 00:19:24.246 "data_size": 63488 00:19:24.246 }, 00:19:24.246 { 00:19:24.246 "name": "pt2", 00:19:24.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.246 "is_configured": true, 00:19:24.246 "data_offset": 2048, 00:19:24.246 "data_size": 63488 00:19:24.246 }, 00:19:24.246 { 00:19:24.246 "name": null, 00:19:24.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:24.246 "is_configured": false, 00:19:24.246 "data_offset": 2048, 00:19:24.246 "data_size": 63488 00:19:24.246 } 00:19:24.246 ] 00:19:24.246 }' 00:19:24.246 10:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.246 10:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.813 10:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:24.813 10:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:19:25.379 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:19:25.379 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:19:25.639 [2024-07-26 10:29:38.338283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:25.639 [2024-07-26 10:29:38.338330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.639 [2024-07-26 10:29:38.338350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c1650 00:19:25.639 [2024-07-26 10:29:38.338362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.639 [2024-07-26 10:29:38.338670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.639 [2024-07-26 10:29:38.338686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:25.639 [2024-07-26 10:29:38.338741] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:25.639 [2024-07-26 10:29:38.338758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:25.639 [2024-07-26 10:29:38.338848] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b0d910 00:19:25.639 [2024-07-26 10:29:38.338857] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:25.639 [2024-07-26 10:29:38.339006] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19d77f0 00:19:25.639 [2024-07-26 10:29:38.339119] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b0d910 00:19:25.639 [2024-07-26 10:29:38.339128] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b0d910 00:19:25.639 [2024-07-26 10:29:38.339229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.639 pt3 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.639 "name": "raid_bdev1", 00:19:25.639 "uuid": "bcc08c6b-0b24-41a7-9737-c871f42d7d46", 00:19:25.639 "strip_size_kb": 0, 00:19:25.639 "state": "online", 00:19:25.639 "raid_level": "raid1", 00:19:25.639 "superblock": true, 00:19:25.639 
"num_base_bdevs": 3, 00:19:25.639 "num_base_bdevs_discovered": 2, 00:19:25.639 "num_base_bdevs_operational": 2, 00:19:25.639 "base_bdevs_list": [ 00:19:25.639 { 00:19:25.639 "name": null, 00:19:25.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.639 "is_configured": false, 00:19:25.639 "data_offset": 2048, 00:19:25.639 "data_size": 63488 00:19:25.639 }, 00:19:25.639 { 00:19:25.639 "name": "pt2", 00:19:25.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.639 "is_configured": true, 00:19:25.639 "data_offset": 2048, 00:19:25.639 "data_size": 63488 00:19:25.639 }, 00:19:25.639 { 00:19:25.639 "name": "pt3", 00:19:25.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:25.639 "is_configured": true, 00:19:25.639 "data_offset": 2048, 00:19:25.639 "data_size": 63488 00:19:25.639 } 00:19:25.639 ] 00:19:25.639 }' 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.639 10:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.207 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:19:26.207 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:26.466 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:19:26.466 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:26.466 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:19:26.726 [2024-07-26 10:29:39.453460] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' bcc08c6b-0b24-41a7-9737-c871f42d7d46 '!=' bcc08c6b-0b24-41a7-9737-c871f42d7d46 ']' 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3411480 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3411480 ']' 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3411480 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3411480 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3411480' 00:19:26.726 killing process with pid 3411480 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3411480 00:19:26.726 [2024-07-26 10:29:39.529676] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.726 [2024-07-26 10:29:39.529724] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.726 [2024-07-26 10:29:39.529771] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.726 [2024-07-26 10:29:39.529785] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b0d910 name raid_bdev1, state offline 00:19:26.726 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3411480 00:19:26.726 [2024-07-26 10:29:39.553269] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.986 10:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:26.986 00:19:26.986 real 0m20.629s 00:19:26.986 user 0m37.598s 00:19:26.986 sys 0m3.808s 00:19:26.986 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:26.986 10:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.986 ************************************ 00:19:26.986 END TEST raid_superblock_test 00:19:26.986 ************************************ 00:19:26.986 10:29:39 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:19:26.986 10:29:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:26.986 10:29:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.986 10:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.986 ************************************ 00:19:26.986 START TEST raid_read_error_test 00:19:26.986 ************************************ 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local 
create_arg 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:19:26.986 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.omtf2QbzkM 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3415389 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3415389 /var/tmp/spdk-raid.sock 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3415389 ']' 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:26.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.987 10:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.987 [2024-07-26 10:29:39.888088] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:19:26.987 [2024-07-26 10:29:39.888152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415389 ] 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:27.246 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:27.246 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.246 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:27.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.247 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:27.247 [2024-07-26 10:29:40.022322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.247 [2024-07-26 10:29:40.067088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.247 [2024-07-26 10:29:40.130431] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.247 [2024-07-26 10:29:40.130470] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.184 10:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.184 10:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:28.184 10:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:28.184 10:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:28.184 BaseBdev1_malloc 00:19:28.184 10:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:28.443 true 00:19:28.443 10:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:28.702 [2024-07-26 10:29:41.414447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:28.702 [2024-07-26 10:29:41.414487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.702 [2024-07-26 10:29:41.414504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17517c0 00:19:28.702 [2024-07-26 10:29:41.414515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.702 [2024-07-26 10:29:41.415989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.702 [2024-07-26 10:29:41.416017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:28.702 BaseBdev1 00:19:28.702 10:29:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:28.702 10:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:28.961 BaseBdev2_malloc 00:19:28.961 10:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:29.220 true 00:19:29.220 10:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:29.220 [2024-07-26 10:29:42.088453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:29.220 [2024-07-26 10:29:42.088493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.220 [2024-07-26 10:29:42.088511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f8960 00:19:29.220 [2024-07-26 10:29:42.088522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.220 [2024-07-26 10:29:42.089777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.220 [2024-07-26 10:29:42.089802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:29.220 BaseBdev2 00:19:29.220 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:29.220 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:29.478 BaseBdev3_malloc 00:19:29.478 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:29.737 true 00:19:29.737 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:30.000 [2024-07-26 10:29:42.754301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:30.000 [2024-07-26 10:29:42.754336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.000 [2024-07-26 10:29:42.754351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16fb720 00:19:30.000 [2024-07-26 10:29:42.754362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.000 [2024-07-26 10:29:42.755600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.000 [2024-07-26 10:29:42.755625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:30.000 BaseBdev3 00:19:30.000 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:30.259 [2024-07-26 10:29:42.974967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.259 [2024-07-26 10:29:42.976037] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.259 [2024-07-26 10:29:42.976096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:30.259 [2024-07-26 10:29:42.976274] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x16fa5b0 00:19:30.259 [2024-07-26 10:29:42.976285] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:30.259 [2024-07-26 10:29:42.976448] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16fa320 00:19:30.259 [2024-07-26 10:29:42.976575] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16fa5b0 00:19:30.259 [2024-07-26 10:29:42.976585] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16fa5b0 00:19:30.259 [2024-07-26 10:29:42.976685] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.259 10:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.518 10:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.518 "name": "raid_bdev1", 00:19:30.518 "uuid": "e213f61e-1f02-46c7-9915-14c9f62312c2", 00:19:30.518 "strip_size_kb": 0, 00:19:30.518 "state": "online", 00:19:30.518 "raid_level": "raid1", 00:19:30.518 "superblock": true, 00:19:30.518 "num_base_bdevs": 3, 00:19:30.518 "num_base_bdevs_discovered": 3, 00:19:30.518 "num_base_bdevs_operational": 3, 00:19:30.518 "base_bdevs_list": [ 00:19:30.518 { 00:19:30.518 "name": "BaseBdev1", 00:19:30.518 "uuid": "50d588e5-ff40-5934-a703-a583e628f858", 00:19:30.518 "is_configured": true, 00:19:30.518 "data_offset": 2048, 00:19:30.518 "data_size": 63488 00:19:30.518 }, 00:19:30.518 { 00:19:30.518 "name": "BaseBdev2", 00:19:30.518 "uuid": "2b9f1a4c-99b0-502a-a3d0-87a79f159a0f", 00:19:30.518 "is_configured": true, 00:19:30.518 "data_offset": 2048, 00:19:30.518 "data_size": 63488 00:19:30.518 }, 00:19:30.518 { 00:19:30.518 "name": "BaseBdev3", 00:19:30.518 "uuid": "17870835-83d4-577b-a831-214e4c3415fe", 00:19:30.518 "is_configured": true, 00:19:30.518 "data_offset": 2048, 00:19:30.518 "data_size": 63488 
00:19:30.518 } 00:19:30.518 ] 00:19:30.518 }' 00:19:30.518 10:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.518 10:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.085 10:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:31.085 10:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:31.085 [2024-07-26 10:29:43.829455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16fcca0 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.022 10:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.282 10:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.282 "name": "raid_bdev1", 00:19:32.282 "uuid": "e213f61e-1f02-46c7-9915-14c9f62312c2", 00:19:32.282 "strip_size_kb": 0, 00:19:32.282 "state": "online", 00:19:32.282 "raid_level": "raid1", 00:19:32.282 "superblock": true, 00:19:32.282 "num_base_bdevs": 3, 00:19:32.282 "num_base_bdevs_discovered": 3, 00:19:32.282 "num_base_bdevs_operational": 3, 00:19:32.282 "base_bdevs_list": [ 00:19:32.282 { 00:19:32.282 "name": "BaseBdev1", 00:19:32.282 "uuid": "50d588e5-ff40-5934-a703-a583e628f858", 00:19:32.282 "is_configured": true, 00:19:32.282 "data_offset": 2048, 00:19:32.282 "data_size": 63488 00:19:32.282 }, 00:19:32.282 { 00:19:32.282 "name": "BaseBdev2", 00:19:32.282 "uuid": 
"2b9f1a4c-99b0-502a-a3d0-87a79f159a0f", 00:19:32.282 "is_configured": true, 00:19:32.282 "data_offset": 2048, 00:19:32.282 "data_size": 63488 00:19:32.282 }, 00:19:32.282 { 00:19:32.282 "name": "BaseBdev3", 00:19:32.282 "uuid": "17870835-83d4-577b-a831-214e4c3415fe", 00:19:32.282 "is_configured": true, 00:19:32.282 "data_offset": 2048, 00:19:32.282 "data_size": 63488 00:19:32.282 } 00:19:32.282 ] 00:19:32.282 }' 00:19:32.282 10:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.282 10:29:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.850 10:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:33.419 [2024-07-26 10:29:46.212321] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.419 [2024-07-26 10:29:46.212356] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.419 [2024-07-26 10:29:46.215323] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.419 [2024-07-26 10:29:46.215355] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.419 [2024-07-26 10:29:46.215446] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.419 [2024-07-26 10:29:46.215457] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fa5b0 name raid_bdev1, state offline 00:19:33.419 0 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3415389 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3415389 ']' 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3415389 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3415389 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3415389' 00:19:33.419 killing process with pid 3415389 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3415389 00:19:33.419 [2024-07-26 10:29:46.301680] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.419 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3415389 00:19:33.419 [2024-07-26 10:29:46.319947] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.omtf2QbzkM 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:19:33.678 10:29:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:33.678 00:19:33.678 real 0m6.699s 00:19:33.678 user 0m10.600s 00:19:33.678 sys 0m1.150s 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.678 10:29:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.678 ************************************ 00:19:33.678 END TEST raid_read_error_test 00:19:33.678 ************************************ 00:19:33.678 10:29:46 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:19:33.678 10:29:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:33.678 10:29:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.678 10:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.939 ************************************ 00:19:33.939 START TEST raid_write_error_test 00:19:33.939 ************************************ 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 
00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.CwF2sLj3jK 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3416778 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3416778 /var/tmp/spdk-raid.sock 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3416778 ']' 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:33.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.939 10:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:33.939 [2024-07-26 10:29:46.666082] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:19:33.939 [2024-07-26 10:29:46.666148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416778 ] 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:33.939 (the same qat_pci_device_allocate() / "cannot be used" message pair repeats for every requested QAT function from 0000:3d:01.1 through 0000:3f:01.6) 00:19:33.939
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:33.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:33.939 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:33.939 [2024-07-26 10:29:46.801352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.199 [2024-07-26 10:29:46.846774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.199 [2024-07-26 10:29:46.905996] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.199 [2024-07-26 10:29:46.906030] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.136 10:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.136 10:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:35.136 10:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:35.136 10:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:35.395 BaseBdev1_malloc 00:19:35.395 10:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:35.962 true 00:19:35.962 10:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:35.962 [2024-07-26 10:29:48.807361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:35.962 [2024-07-26 10:29:48.807404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.962 [2024-07-26 10:29:48.807423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15197c0 00:19:35.962 [2024-07-26 10:29:48.807435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.962 [2024-07-26 10:29:48.809026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.962 [2024-07-26 10:29:48.809054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.962 BaseBdev1 00:19:35.962 10:29:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:35.962 10:29:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:36.530 BaseBdev2_malloc 00:19:36.530 10:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:36.789 true 00:19:36.789 10:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:37.355 [2024-07-26 10:29:50.046927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:37.355 [2024-07-26 10:29:50.046972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.355 [2024-07-26 10:29:50.046992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c0960 00:19:37.355 [2024-07-26 10:29:50.047003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.355 [2024-07-26 10:29:50.048403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.355 [2024-07-26 10:29:50.048430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:37.355 BaseBdev2 00:19:37.355 10:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:37.355 10:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:37.613 BaseBdev3_malloc 00:19:37.613 10:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:38.178 true 00:19:38.179 10:29:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:38.179 [2024-07-26 10:29:51.029803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:38.179 [2024-07-26 10:29:51.029844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.179 [2024-07-26 10:29:51.029863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c3720 00:19:38.179 [2024-07-26 10:29:51.029874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.179 [2024-07-26 10:29:51.031236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.179 [2024-07-26 10:29:51.031264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:38.179 BaseBdev3 00:19:38.179 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:38.745 [2024-07-26 10:29:51.527108] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.745 [2024-07-26 10:29:51.528282] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.745 [2024-07-26 10:29:51.528342] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.745 [2024-07-26 10:29:51.528515] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x14c25b0 00:19:38.745 [2024-07-26 10:29:51.528526] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:38.745 [2024-07-26 10:29:51.528704] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c2320 00:19:38.745 [2024-07-26 10:29:51.528837] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14c25b0 00:19:38.745 [2024-07-26 10:29:51.528846] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14c25b0 00:19:38.745 [2024-07-26 10:29:51.528954] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.745 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.003 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.003 "name": "raid_bdev1", 00:19:39.003 "uuid": "b0b69162-4c55-4468-b391-b85060aabc45", 00:19:39.003 "strip_size_kb": 0, 00:19:39.003 "state": "online", 00:19:39.003 "raid_level": "raid1", 00:19:39.003 "superblock": true, 00:19:39.003 "num_base_bdevs": 3, 00:19:39.003 "num_base_bdevs_discovered": 3, 00:19:39.003 "num_base_bdevs_operational": 3, 00:19:39.003 "base_bdevs_list": [ 00:19:39.003 { 00:19:39.003 "name": "BaseBdev1", 00:19:39.003 "uuid": "fc9df807-2c03-50dc-8067-d88a8bcb3989", 00:19:39.003 "is_configured": true, 00:19:39.003 "data_offset": 2048, 00:19:39.003 "data_size": 63488 00:19:39.003 }, 00:19:39.003 { 00:19:39.003 "name": "BaseBdev2", 00:19:39.003 "uuid": "7dec05fe-19ce-59c1-b152-67e4cebb5d56", 00:19:39.003 "is_configured": true, 00:19:39.003 "data_offset": 2048, 00:19:39.003 "data_size": 63488 00:19:39.003 }, 00:19:39.003 { 00:19:39.003 "name": "BaseBdev3", 00:19:39.004 "uuid": "48bc72e4-dc78-5540-b0f2-4da937d3afe0", 00:19:39.004 "is_configured": true, 00:19:39.004 "data_offset": 2048, 00:19:39.004 "data_size": 
63488 00:19:39.004 } 00:19:39.004 ] 00:19:39.004 }' 00:19:39.004 10:29:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.004 10:29:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.570 10:29:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:39.570 10:29:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:39.570 [2024-07-26 10:29:52.457805] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c4ca0 00:19:40.507 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:40.765 [2024-07-26 10:29:53.567198] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:19:40.766 [2024-07-26 10:29:53.567255] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:40.766 [2024-07-26 10:29:53.567446] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x14c4ca0 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.766 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.024 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.024 "name": "raid_bdev1", 00:19:41.024 "uuid": "b0b69162-4c55-4468-b391-b85060aabc45", 00:19:41.024 "strip_size_kb": 0, 00:19:41.024 "state": "online", 00:19:41.024 "raid_level": "raid1", 00:19:41.024 "superblock": true, 00:19:41.024 "num_base_bdevs": 3, 
00:19:41.024 "num_base_bdevs_discovered": 2, 00:19:41.024 "num_base_bdevs_operational": 2, 00:19:41.024 "base_bdevs_list": [ 00:19:41.024 { 00:19:41.024 "name": null, 00:19:41.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.024 "is_configured": false, 00:19:41.024 "data_offset": 2048, 00:19:41.024 "data_size": 63488 00:19:41.024 }, 00:19:41.024 { 00:19:41.024 "name": "BaseBdev2", 00:19:41.024 "uuid": "7dec05fe-19ce-59c1-b152-67e4cebb5d56", 00:19:41.024 "is_configured": true, 00:19:41.024 "data_offset": 2048, 00:19:41.024 "data_size": 63488 00:19:41.024 }, 00:19:41.024 { 00:19:41.024 "name": "BaseBdev3", 00:19:41.024 "uuid": "48bc72e4-dc78-5540-b0f2-4da937d3afe0", 00:19:41.024 "is_configured": true, 00:19:41.024 "data_offset": 2048, 00:19:41.024 "data_size": 63488 00:19:41.024 } 00:19:41.024 ] 00:19:41.024 }' 00:19:41.024 10:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.024 10:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.590 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:41.848 [2024-07-26 10:29:54.612744] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.848 [2024-07-26 10:29:54.612778] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.848 [2024-07-26 10:29:54.615656] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.848 [2024-07-26 10:29:54.615684] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.848 [2024-07-26 10:29:54.615750] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.848 [2024-07-26 10:29:54.615761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14c25b0 name raid_bdev1, state offline 00:19:41.848 0 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3416778 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3416778 ']' 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3416778 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3416778 00:19:41.848 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:41.849 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:41.849 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3416778' 00:19:41.849 killing process with pid 3416778 00:19:41.849 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3416778 00:19:41.849 [2024-07-26 10:29:54.691105] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.849 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3416778 00:19:41.849 [2024-07-26 10:29:54.709457] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.108 10:29:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.CwF2sLj3jK 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:19:42.108 00:19:42.108 real 0m8.312s 00:19:42.108 user 0m13.538s 00:19:42.108 sys 0m1.431s 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.108 10:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.108 ************************************ 00:19:42.108 END TEST raid_write_error_test 00:19:42.108 ************************************ 00:19:42.108 10:29:54 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:19:42.108 10:29:54 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:19:42.108 10:29:54 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:19:42.108 10:29:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:42.108 10:29:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.108 10:29:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.108 ************************************ 00:19:42.108 START TEST raid_state_function_test 00:19:42.108 ************************************ 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:42.108 10:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:42.108 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3418200 00:19:42.108 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3418200' 00:19:42.108 Process raid pid: 3418200 00:19:42.108 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:42.108 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3418200 /var/tmp/spdk-raid.sock 00:19:42.108 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3418200 ']' 00:19:42.109 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:42.109 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.109 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:42.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:42.109 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.109 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.407 [2024-07-26 10:29:55.057917] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
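The trace below exercises the raid0 state machine with four base bdevs and no superblock: Existed_Raid is declared before any of its base bdevs exist, so it must report the "configuring" state, and each malloc bdev created afterwards is claimed by the raid and raises num_base_bdevs_discovered while the state stays "configuring" as long as base bdevs are still missing (the harness also deletes and re-creates Existed_Raid between steps). A condensed sketch of that sequence, reusing the rpc.py path and socket from the trace (names, strip size and bdev sizes match the traced commands; this summarizes rather than reproduces the harness):

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # declare the raid0 bdev first; its base bdevs do not exist yet, so it sits in "configuring"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all    # state "configuring", 0 of 4 base bdevs discovered
  # add base bdevs one at a time; each is claimed and raises num_base_bdevs_discovered
  for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev$i
    $RPC bdev_raid_get_bdevs all
  done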
00:19:42.407 [2024-07-26 10:29:55.057978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:42.407 (the same qat_pci_device_allocate() / "cannot be used" message pair repeats for every requested QAT function from 0000:3d:01.1 through 0000:3f:01.6) 00:19:42.407
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:42.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:42.407 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:42.407 [2024-07-26 10:29:55.194582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.407 [2024-07-26 10:29:55.237872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.666 [2024-07-26 10:29:55.303698] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.666 [2024-07-26 10:29:55.303729] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.234 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.234 10:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:43.234 10:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:43.235 [2024-07-26 10:29:56.064535] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:43.235 [2024-07-26 10:29:56.064582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:43.235 [2024-07-26 10:29:56.064592] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:43.235 [2024-07-26 10:29:56.064603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:43.235 [2024-07-26 10:29:56.064611] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:43.235 [2024-07-26 10:29:56.064621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:43.235 [2024-07-26 10:29:56.064628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:43.235 [2024-07-26 10:29:56.064638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.235 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.494 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.494 "name": "Existed_Raid", 00:19:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.494 "strip_size_kb": 64, 00:19:43.494 "state": "configuring", 00:19:43.494 "raid_level": "raid0", 00:19:43.494 "superblock": false, 00:19:43.494 "num_base_bdevs": 4, 00:19:43.494 "num_base_bdevs_discovered": 0, 00:19:43.494 "num_base_bdevs_operational": 4, 00:19:43.494 "base_bdevs_list": [ 00:19:43.494 { 00:19:43.494 "name": "BaseBdev1", 00:19:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.494 "is_configured": false, 00:19:43.494 "data_offset": 0, 00:19:43.494 "data_size": 0 00:19:43.494 }, 00:19:43.494 { 00:19:43.494 "name": "BaseBdev2", 00:19:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.494 "is_configured": false, 00:19:43.494 "data_offset": 0, 00:19:43.494 "data_size": 0 00:19:43.494 }, 00:19:43.494 { 00:19:43.494 "name": "BaseBdev3", 00:19:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.494 "is_configured": false, 00:19:43.494 "data_offset": 0, 00:19:43.494 "data_size": 0 00:19:43.494 }, 00:19:43.494 { 00:19:43.494 "name": "BaseBdev4", 00:19:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.494 "is_configured": false, 00:19:43.494 "data_offset": 0, 00:19:43.494 "data_size": 0 00:19:43.494 } 00:19:43.494 ] 00:19:43.494 }' 00:19:43.494 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.494 10:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.063 10:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:44.322 [2024-07-26 10:29:57.095144] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:44.322 [2024-07-26 10:29:57.095179] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xacdb70 name Existed_Raid, state configuring 00:19:44.322 10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:44.581 
[2024-07-26 10:29:57.323764] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:44.581 [2024-07-26 10:29:57.323797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:44.581 [2024-07-26 10:29:57.323806] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:44.581 [2024-07-26 10:29:57.323817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:44.581 [2024-07-26 10:29:57.323825] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:44.581 [2024-07-26 10:29:57.323835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:44.581 [2024-07-26 10:29:57.323843] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:44.581 [2024-07-26 10:29:57.323853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:44.581 10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:44.840 [2024-07-26 10:29:57.553682] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.840 BaseBdev1 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:44.840 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.099 10:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:45.358 [ 00:19:45.358 { 00:19:45.358 "name": "BaseBdev1", 00:19:45.358 "aliases": [ 00:19:45.358 "9016274e-9876-4336-bcb1-f9de0add1e78" 00:19:45.358 ], 00:19:45.358 "product_name": "Malloc disk", 00:19:45.358 "block_size": 512, 00:19:45.358 "num_blocks": 65536, 00:19:45.358 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:45.358 "assigned_rate_limits": { 00:19:45.358 "rw_ios_per_sec": 0, 00:19:45.358 "rw_mbytes_per_sec": 0, 00:19:45.358 "r_mbytes_per_sec": 0, 00:19:45.358 "w_mbytes_per_sec": 0 00:19:45.358 }, 00:19:45.358 "claimed": true, 00:19:45.358 "claim_type": "exclusive_write", 00:19:45.358 "zoned": false, 00:19:45.358 "supported_io_types": { 00:19:45.358 "read": true, 00:19:45.358 "write": true, 00:19:45.358 "unmap": true, 00:19:45.358 "flush": true, 00:19:45.358 "reset": true, 00:19:45.358 "nvme_admin": false, 00:19:45.358 "nvme_io": false, 00:19:45.358 "nvme_io_md": false, 00:19:45.358 "write_zeroes": true, 00:19:45.358 "zcopy": true, 00:19:45.358 "get_zone_info": false, 00:19:45.358 "zone_management": false, 00:19:45.358 
"zone_append": false, 00:19:45.358 "compare": false, 00:19:45.358 "compare_and_write": false, 00:19:45.358 "abort": true, 00:19:45.358 "seek_hole": false, 00:19:45.358 "seek_data": false, 00:19:45.358 "copy": true, 00:19:45.358 "nvme_iov_md": false 00:19:45.358 }, 00:19:45.358 "memory_domains": [ 00:19:45.358 { 00:19:45.358 "dma_device_id": "system", 00:19:45.358 "dma_device_type": 1 00:19:45.358 }, 00:19:45.358 { 00:19:45.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.358 "dma_device_type": 2 00:19:45.358 } 00:19:45.358 ], 00:19:45.358 "driver_specific": {} 00:19:45.358 } 00:19:45.358 ] 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.358 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.358 "name": "Existed_Raid", 00:19:45.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.358 "strip_size_kb": 64, 00:19:45.358 "state": "configuring", 00:19:45.358 "raid_level": "raid0", 00:19:45.358 "superblock": false, 00:19:45.358 "num_base_bdevs": 4, 00:19:45.358 "num_base_bdevs_discovered": 1, 00:19:45.358 "num_base_bdevs_operational": 4, 00:19:45.358 "base_bdevs_list": [ 00:19:45.358 { 00:19:45.358 "name": "BaseBdev1", 00:19:45.358 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:45.358 "is_configured": true, 00:19:45.358 "data_offset": 0, 00:19:45.358 "data_size": 65536 00:19:45.358 }, 00:19:45.358 { 00:19:45.358 "name": "BaseBdev2", 00:19:45.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.358 "is_configured": false, 00:19:45.358 "data_offset": 0, 00:19:45.358 "data_size": 0 00:19:45.358 }, 00:19:45.358 { 00:19:45.358 "name": "BaseBdev3", 00:19:45.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.358 "is_configured": false, 00:19:45.358 "data_offset": 0, 00:19:45.358 "data_size": 0 00:19:45.358 }, 00:19:45.358 { 00:19:45.358 "name": "BaseBdev4", 00:19:45.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.358 "is_configured": false, 00:19:45.358 "data_offset": 0, 
00:19:45.358 "data_size": 0 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }' 00:19:45.359 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.359 10:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.927 10:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:46.186 [2024-07-26 10:29:59.033578] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:46.186 [2024-07-26 10:29:59.033618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xacd4a0 name Existed_Raid, state configuring 00:19:46.186 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:46.445 [2024-07-26 10:29:59.262223] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.445 [2024-07-26 10:29:59.263596] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.445 [2024-07-26 10:29:59.263628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:46.445 [2024-07-26 10:29:59.263638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:46.445 [2024-07-26 10:29:59.263648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:46.445 [2024-07-26 10:29:59.263656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:46.445 [2024-07-26 10:29:59.263667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.446 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.446 10:29:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.705 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.705 "name": "Existed_Raid", 00:19:46.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.705 "strip_size_kb": 64, 00:19:46.705 "state": "configuring", 00:19:46.705 "raid_level": "raid0", 00:19:46.705 "superblock": false, 00:19:46.705 "num_base_bdevs": 4, 00:19:46.705 "num_base_bdevs_discovered": 1, 00:19:46.705 "num_base_bdevs_operational": 4, 00:19:46.705 "base_bdevs_list": [ 00:19:46.705 { 00:19:46.705 "name": "BaseBdev1", 00:19:46.705 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:46.705 "is_configured": true, 00:19:46.705 "data_offset": 0, 00:19:46.705 "data_size": 65536 00:19:46.705 }, 00:19:46.705 { 00:19:46.705 "name": "BaseBdev2", 00:19:46.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.705 "is_configured": false, 00:19:46.705 "data_offset": 0, 00:19:46.705 "data_size": 0 00:19:46.705 }, 00:19:46.705 { 00:19:46.705 "name": "BaseBdev3", 00:19:46.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.705 "is_configured": false, 00:19:46.705 "data_offset": 0, 00:19:46.705 "data_size": 0 00:19:46.705 }, 00:19:46.705 { 00:19:46.705 "name": "BaseBdev4", 00:19:46.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.705 "is_configured": false, 00:19:46.705 "data_offset": 0, 00:19:46.705 "data_size": 0 00:19:46.705 } 00:19:46.705 ] 00:19:46.705 }' 00:19:46.705 10:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.705 10:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.274 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:47.533 [2024-07-26 10:30:00.296170] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.533 BaseBdev2 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:47.533 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.793 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:48.052 [ 00:19:48.052 { 00:19:48.052 "name": "BaseBdev2", 00:19:48.052 "aliases": [ 00:19:48.052 "6b301594-f168-472f-93d4-82b263f7df23" 00:19:48.052 ], 00:19:48.052 "product_name": "Malloc disk", 00:19:48.052 "block_size": 512, 00:19:48.052 "num_blocks": 65536, 00:19:48.052 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:48.052 
"assigned_rate_limits": { 00:19:48.052 "rw_ios_per_sec": 0, 00:19:48.052 "rw_mbytes_per_sec": 0, 00:19:48.052 "r_mbytes_per_sec": 0, 00:19:48.052 "w_mbytes_per_sec": 0 00:19:48.052 }, 00:19:48.052 "claimed": true, 00:19:48.052 "claim_type": "exclusive_write", 00:19:48.052 "zoned": false, 00:19:48.052 "supported_io_types": { 00:19:48.052 "read": true, 00:19:48.052 "write": true, 00:19:48.052 "unmap": true, 00:19:48.052 "flush": true, 00:19:48.052 "reset": true, 00:19:48.052 "nvme_admin": false, 00:19:48.052 "nvme_io": false, 00:19:48.052 "nvme_io_md": false, 00:19:48.052 "write_zeroes": true, 00:19:48.052 "zcopy": true, 00:19:48.052 "get_zone_info": false, 00:19:48.052 "zone_management": false, 00:19:48.052 "zone_append": false, 00:19:48.052 "compare": false, 00:19:48.052 "compare_and_write": false, 00:19:48.052 "abort": true, 00:19:48.052 "seek_hole": false, 00:19:48.052 "seek_data": false, 00:19:48.052 "copy": true, 00:19:48.052 "nvme_iov_md": false 00:19:48.052 }, 00:19:48.052 "memory_domains": [ 00:19:48.052 { 00:19:48.052 "dma_device_id": "system", 00:19:48.052 "dma_device_type": 1 00:19:48.052 }, 00:19:48.052 { 00:19:48.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.052 "dma_device_type": 2 00:19:48.052 } 00:19:48.052 ], 00:19:48.052 "driver_specific": {} 00:19:48.052 } 00:19:48.052 ] 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.052 10:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.311 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.312 "name": "Existed_Raid", 00:19:48.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.312 "strip_size_kb": 64, 00:19:48.312 "state": "configuring", 00:19:48.312 "raid_level": "raid0", 00:19:48.312 "superblock": false, 00:19:48.312 "num_base_bdevs": 4, 00:19:48.312 "num_base_bdevs_discovered": 2, 
00:19:48.312 "num_base_bdevs_operational": 4, 00:19:48.312 "base_bdevs_list": [ 00:19:48.312 { 00:19:48.312 "name": "BaseBdev1", 00:19:48.312 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:48.312 "is_configured": true, 00:19:48.312 "data_offset": 0, 00:19:48.312 "data_size": 65536 00:19:48.312 }, 00:19:48.312 { 00:19:48.312 "name": "BaseBdev2", 00:19:48.312 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:48.312 "is_configured": true, 00:19:48.312 "data_offset": 0, 00:19:48.312 "data_size": 65536 00:19:48.312 }, 00:19:48.312 { 00:19:48.312 "name": "BaseBdev3", 00:19:48.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.312 "is_configured": false, 00:19:48.312 "data_offset": 0, 00:19:48.312 "data_size": 0 00:19:48.312 }, 00:19:48.312 { 00:19:48.312 "name": "BaseBdev4", 00:19:48.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.312 "is_configured": false, 00:19:48.312 "data_offset": 0, 00:19:48.312 "data_size": 0 00:19:48.312 } 00:19:48.312 ] 00:19:48.312 }' 00:19:48.312 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.312 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.878 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:49.137 [2024-07-26 10:30:01.803275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.137 BaseBdev3 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:49.137 10:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:49.137 10:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:49.396 [ 00:19:49.396 { 00:19:49.396 "name": "BaseBdev3", 00:19:49.396 "aliases": [ 00:19:49.396 "ed190f84-8c41-4d7b-99ca-475760cb50a4" 00:19:49.396 ], 00:19:49.396 "product_name": "Malloc disk", 00:19:49.396 "block_size": 512, 00:19:49.396 "num_blocks": 65536, 00:19:49.396 "uuid": "ed190f84-8c41-4d7b-99ca-475760cb50a4", 00:19:49.396 "assigned_rate_limits": { 00:19:49.396 "rw_ios_per_sec": 0, 00:19:49.396 "rw_mbytes_per_sec": 0, 00:19:49.396 "r_mbytes_per_sec": 0, 00:19:49.396 "w_mbytes_per_sec": 0 00:19:49.396 }, 00:19:49.396 "claimed": true, 00:19:49.396 "claim_type": "exclusive_write", 00:19:49.396 "zoned": false, 00:19:49.396 "supported_io_types": { 00:19:49.396 "read": true, 00:19:49.396 "write": true, 00:19:49.396 "unmap": true, 00:19:49.396 "flush": true, 00:19:49.396 "reset": true, 00:19:49.396 "nvme_admin": false, 00:19:49.396 "nvme_io": false, 00:19:49.396 
"nvme_io_md": false, 00:19:49.396 "write_zeroes": true, 00:19:49.396 "zcopy": true, 00:19:49.396 "get_zone_info": false, 00:19:49.396 "zone_management": false, 00:19:49.396 "zone_append": false, 00:19:49.396 "compare": false, 00:19:49.396 "compare_and_write": false, 00:19:49.396 "abort": true, 00:19:49.396 "seek_hole": false, 00:19:49.396 "seek_data": false, 00:19:49.396 "copy": true, 00:19:49.396 "nvme_iov_md": false 00:19:49.396 }, 00:19:49.396 "memory_domains": [ 00:19:49.396 { 00:19:49.396 "dma_device_id": "system", 00:19:49.396 "dma_device_type": 1 00:19:49.396 }, 00:19:49.396 { 00:19:49.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.396 "dma_device_type": 2 00:19:49.396 } 00:19:49.396 ], 00:19:49.396 "driver_specific": {} 00:19:49.396 } 00:19:49.396 ] 00:19:49.396 10:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:49.396 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:49.396 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:49.396 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:49.396 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.397 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.656 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.656 "name": "Existed_Raid", 00:19:49.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.656 "strip_size_kb": 64, 00:19:49.656 "state": "configuring", 00:19:49.656 "raid_level": "raid0", 00:19:49.656 "superblock": false, 00:19:49.656 "num_base_bdevs": 4, 00:19:49.656 "num_base_bdevs_discovered": 3, 00:19:49.656 "num_base_bdevs_operational": 4, 00:19:49.656 "base_bdevs_list": [ 00:19:49.656 { 00:19:49.656 "name": "BaseBdev1", 00:19:49.656 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:49.656 "is_configured": true, 00:19:49.656 "data_offset": 0, 00:19:49.656 "data_size": 65536 00:19:49.656 }, 00:19:49.656 { 00:19:49.656 "name": "BaseBdev2", 00:19:49.656 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:49.656 "is_configured": true, 00:19:49.656 "data_offset": 0, 00:19:49.656 "data_size": 65536 00:19:49.656 }, 00:19:49.656 { 
00:19:49.656 "name": "BaseBdev3", 00:19:49.656 "uuid": "ed190f84-8c41-4d7b-99ca-475760cb50a4", 00:19:49.656 "is_configured": true, 00:19:49.656 "data_offset": 0, 00:19:49.656 "data_size": 65536 00:19:49.656 }, 00:19:49.656 { 00:19:49.656 "name": "BaseBdev4", 00:19:49.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.656 "is_configured": false, 00:19:49.656 "data_offset": 0, 00:19:49.656 "data_size": 0 00:19:49.656 } 00:19:49.656 ] 00:19:49.656 }' 00:19:49.656 10:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.656 10:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.224 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:50.484 [2024-07-26 10:30:03.258234] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:50.484 [2024-07-26 10:30:03.258266] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc803e0 00:19:50.484 [2024-07-26 10:30:03.258274] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:50.484 [2024-07-26 10:30:03.258453] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xac76e0 00:19:50.484 [2024-07-26 10:30:03.258564] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc803e0 00:19:50.484 [2024-07-26 10:30:03.258573] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xc803e0 00:19:50.484 [2024-07-26 10:30:03.258725] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.484 BaseBdev4 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:50.484 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.742 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:51.001 [ 00:19:51.001 { 00:19:51.001 "name": "BaseBdev4", 00:19:51.001 "aliases": [ 00:19:51.001 "df8cefd0-579c-4811-a3e7-8643f6c1fe7d" 00:19:51.001 ], 00:19:51.001 "product_name": "Malloc disk", 00:19:51.001 "block_size": 512, 00:19:51.001 "num_blocks": 65536, 00:19:51.001 "uuid": "df8cefd0-579c-4811-a3e7-8643f6c1fe7d", 00:19:51.001 "assigned_rate_limits": { 00:19:51.001 "rw_ios_per_sec": 0, 00:19:51.001 "rw_mbytes_per_sec": 0, 00:19:51.001 "r_mbytes_per_sec": 0, 00:19:51.001 "w_mbytes_per_sec": 0 00:19:51.001 }, 00:19:51.001 "claimed": true, 00:19:51.001 "claim_type": "exclusive_write", 00:19:51.001 "zoned": false, 00:19:51.001 "supported_io_types": { 
00:19:51.001 "read": true, 00:19:51.001 "write": true, 00:19:51.001 "unmap": true, 00:19:51.001 "flush": true, 00:19:51.001 "reset": true, 00:19:51.001 "nvme_admin": false, 00:19:51.001 "nvme_io": false, 00:19:51.001 "nvme_io_md": false, 00:19:51.001 "write_zeroes": true, 00:19:51.001 "zcopy": true, 00:19:51.001 "get_zone_info": false, 00:19:51.001 "zone_management": false, 00:19:51.001 "zone_append": false, 00:19:51.001 "compare": false, 00:19:51.001 "compare_and_write": false, 00:19:51.001 "abort": true, 00:19:51.001 "seek_hole": false, 00:19:51.001 "seek_data": false, 00:19:51.001 "copy": true, 00:19:51.001 "nvme_iov_md": false 00:19:51.001 }, 00:19:51.001 "memory_domains": [ 00:19:51.001 { 00:19:51.001 "dma_device_id": "system", 00:19:51.001 "dma_device_type": 1 00:19:51.001 }, 00:19:51.001 { 00:19:51.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.001 "dma_device_type": 2 00:19:51.001 } 00:19:51.001 ], 00:19:51.001 "driver_specific": {} 00:19:51.001 } 00:19:51.001 ] 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.001 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.260 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.260 "name": "Existed_Raid", 00:19:51.260 "uuid": "4bcefa3e-753f-4a5f-9267-11564fa5bcb7", 00:19:51.260 "strip_size_kb": 64, 00:19:51.260 "state": "online", 00:19:51.260 "raid_level": "raid0", 00:19:51.260 "superblock": false, 00:19:51.260 "num_base_bdevs": 4, 00:19:51.260 "num_base_bdevs_discovered": 4, 00:19:51.260 "num_base_bdevs_operational": 4, 00:19:51.260 "base_bdevs_list": [ 00:19:51.260 { 00:19:51.260 "name": "BaseBdev1", 00:19:51.260 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 0, 00:19:51.260 "data_size": 65536 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": 
"BaseBdev2", 00:19:51.260 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 0, 00:19:51.260 "data_size": 65536 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": "BaseBdev3", 00:19:51.260 "uuid": "ed190f84-8c41-4d7b-99ca-475760cb50a4", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 0, 00:19:51.260 "data_size": 65536 00:19:51.260 }, 00:19:51.260 { 00:19:51.260 "name": "BaseBdev4", 00:19:51.260 "uuid": "df8cefd0-579c-4811-a3e7-8643f6c1fe7d", 00:19:51.260 "is_configured": true, 00:19:51.260 "data_offset": 0, 00:19:51.260 "data_size": 65536 00:19:51.260 } 00:19:51.260 ] 00:19:51.260 }' 00:19:51.260 10:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.260 10:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:51.828 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:52.087 [2024-07-26 10:30:04.770702] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.087 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:52.087 "name": "Existed_Raid", 00:19:52.087 "aliases": [ 00:19:52.087 "4bcefa3e-753f-4a5f-9267-11564fa5bcb7" 00:19:52.087 ], 00:19:52.087 "product_name": "Raid Volume", 00:19:52.087 "block_size": 512, 00:19:52.087 "num_blocks": 262144, 00:19:52.087 "uuid": "4bcefa3e-753f-4a5f-9267-11564fa5bcb7", 00:19:52.087 "assigned_rate_limits": { 00:19:52.087 "rw_ios_per_sec": 0, 00:19:52.087 "rw_mbytes_per_sec": 0, 00:19:52.087 "r_mbytes_per_sec": 0, 00:19:52.087 "w_mbytes_per_sec": 0 00:19:52.087 }, 00:19:52.087 "claimed": false, 00:19:52.087 "zoned": false, 00:19:52.087 "supported_io_types": { 00:19:52.087 "read": true, 00:19:52.087 "write": true, 00:19:52.087 "unmap": true, 00:19:52.087 "flush": true, 00:19:52.087 "reset": true, 00:19:52.087 "nvme_admin": false, 00:19:52.087 "nvme_io": false, 00:19:52.087 "nvme_io_md": false, 00:19:52.087 "write_zeroes": true, 00:19:52.087 "zcopy": false, 00:19:52.087 "get_zone_info": false, 00:19:52.087 "zone_management": false, 00:19:52.087 "zone_append": false, 00:19:52.087 "compare": false, 00:19:52.087 "compare_and_write": false, 00:19:52.087 "abort": false, 00:19:52.087 "seek_hole": false, 00:19:52.087 "seek_data": false, 00:19:52.087 "copy": false, 00:19:52.087 "nvme_iov_md": false 00:19:52.087 }, 00:19:52.087 "memory_domains": [ 00:19:52.087 { 00:19:52.087 "dma_device_id": "system", 00:19:52.087 "dma_device_type": 1 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.087 
"dma_device_type": 2 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "system", 00:19:52.087 "dma_device_type": 1 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.087 "dma_device_type": 2 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "system", 00:19:52.087 "dma_device_type": 1 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.087 "dma_device_type": 2 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "system", 00:19:52.087 "dma_device_type": 1 00:19:52.087 }, 00:19:52.087 { 00:19:52.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.087 "dma_device_type": 2 00:19:52.087 } 00:19:52.087 ], 00:19:52.087 "driver_specific": { 00:19:52.087 "raid": { 00:19:52.087 "uuid": "4bcefa3e-753f-4a5f-9267-11564fa5bcb7", 00:19:52.087 "strip_size_kb": 64, 00:19:52.087 "state": "online", 00:19:52.087 "raid_level": "raid0", 00:19:52.087 "superblock": false, 00:19:52.087 "num_base_bdevs": 4, 00:19:52.087 "num_base_bdevs_discovered": 4, 00:19:52.087 "num_base_bdevs_operational": 4, 00:19:52.087 "base_bdevs_list": [ 00:19:52.087 { 00:19:52.088 "name": "BaseBdev1", 00:19:52.088 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:52.088 "is_configured": true, 00:19:52.088 "data_offset": 0, 00:19:52.088 "data_size": 65536 00:19:52.088 }, 00:19:52.088 { 00:19:52.088 "name": "BaseBdev2", 00:19:52.088 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:52.088 "is_configured": true, 00:19:52.088 "data_offset": 0, 00:19:52.088 "data_size": 65536 00:19:52.088 }, 00:19:52.088 { 00:19:52.088 "name": "BaseBdev3", 00:19:52.088 "uuid": "ed190f84-8c41-4d7b-99ca-475760cb50a4", 00:19:52.088 "is_configured": true, 00:19:52.088 "data_offset": 0, 00:19:52.088 "data_size": 65536 00:19:52.088 }, 00:19:52.088 { 00:19:52.088 "name": "BaseBdev4", 00:19:52.088 "uuid": "df8cefd0-579c-4811-a3e7-8643f6c1fe7d", 00:19:52.088 "is_configured": true, 00:19:52.088 "data_offset": 0, 00:19:52.088 "data_size": 65536 00:19:52.088 } 00:19:52.088 ] 00:19:52.088 } 00:19:52.088 } 00:19:52.088 }' 00:19:52.088 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:52.088 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:52.088 BaseBdev2 00:19:52.088 BaseBdev3 00:19:52.088 BaseBdev4' 00:19:52.088 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:52.088 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:52.088 10:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:52.347 "name": "BaseBdev1", 00:19:52.347 "aliases": [ 00:19:52.347 "9016274e-9876-4336-bcb1-f9de0add1e78" 00:19:52.347 ], 00:19:52.347 "product_name": "Malloc disk", 00:19:52.347 "block_size": 512, 00:19:52.347 "num_blocks": 65536, 00:19:52.347 "uuid": "9016274e-9876-4336-bcb1-f9de0add1e78", 00:19:52.347 "assigned_rate_limits": { 00:19:52.347 "rw_ios_per_sec": 0, 00:19:52.347 "rw_mbytes_per_sec": 0, 00:19:52.347 "r_mbytes_per_sec": 0, 00:19:52.347 "w_mbytes_per_sec": 0 00:19:52.347 }, 00:19:52.347 "claimed": true, 00:19:52.347 "claim_type": "exclusive_write", 
00:19:52.347 "zoned": false, 00:19:52.347 "supported_io_types": { 00:19:52.347 "read": true, 00:19:52.347 "write": true, 00:19:52.347 "unmap": true, 00:19:52.347 "flush": true, 00:19:52.347 "reset": true, 00:19:52.347 "nvme_admin": false, 00:19:52.347 "nvme_io": false, 00:19:52.347 "nvme_io_md": false, 00:19:52.347 "write_zeroes": true, 00:19:52.347 "zcopy": true, 00:19:52.347 "get_zone_info": false, 00:19:52.347 "zone_management": false, 00:19:52.347 "zone_append": false, 00:19:52.347 "compare": false, 00:19:52.347 "compare_and_write": false, 00:19:52.347 "abort": true, 00:19:52.347 "seek_hole": false, 00:19:52.347 "seek_data": false, 00:19:52.347 "copy": true, 00:19:52.347 "nvme_iov_md": false 00:19:52.347 }, 00:19:52.347 "memory_domains": [ 00:19:52.347 { 00:19:52.347 "dma_device_id": "system", 00:19:52.347 "dma_device_type": 1 00:19:52.347 }, 00:19:52.347 { 00:19:52.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.347 "dma_device_type": 2 00:19:52.347 } 00:19:52.347 ], 00:19:52.347 "driver_specific": {} 00:19:52.347 }' 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:52.347 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:52.607 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:52.866 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:52.866 "name": "BaseBdev2", 00:19:52.866 "aliases": [ 00:19:52.866 "6b301594-f168-472f-93d4-82b263f7df23" 00:19:52.866 ], 00:19:52.866 "product_name": "Malloc disk", 00:19:52.866 "block_size": 512, 00:19:52.866 "num_blocks": 65536, 00:19:52.866 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:52.866 "assigned_rate_limits": { 00:19:52.866 "rw_ios_per_sec": 0, 00:19:52.866 "rw_mbytes_per_sec": 0, 00:19:52.866 "r_mbytes_per_sec": 0, 00:19:52.866 "w_mbytes_per_sec": 0 00:19:52.866 }, 00:19:52.866 "claimed": true, 00:19:52.866 "claim_type": "exclusive_write", 00:19:52.866 "zoned": false, 00:19:52.866 "supported_io_types": { 00:19:52.866 "read": true, 00:19:52.866 "write": true, 00:19:52.866 "unmap": true, 00:19:52.866 "flush": true, 
00:19:52.866 "reset": true, 00:19:52.866 "nvme_admin": false, 00:19:52.866 "nvme_io": false, 00:19:52.866 "nvme_io_md": false, 00:19:52.866 "write_zeroes": true, 00:19:52.866 "zcopy": true, 00:19:52.866 "get_zone_info": false, 00:19:52.866 "zone_management": false, 00:19:52.866 "zone_append": false, 00:19:52.866 "compare": false, 00:19:52.866 "compare_and_write": false, 00:19:52.866 "abort": true, 00:19:52.866 "seek_hole": false, 00:19:52.866 "seek_data": false, 00:19:52.866 "copy": true, 00:19:52.866 "nvme_iov_md": false 00:19:52.866 }, 00:19:52.866 "memory_domains": [ 00:19:52.866 { 00:19:52.867 "dma_device_id": "system", 00:19:52.867 "dma_device_type": 1 00:19:52.867 }, 00:19:52.867 { 00:19:52.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.867 "dma_device_type": 2 00:19:52.867 } 00:19:52.867 ], 00:19:52.867 "driver_specific": {} 00:19:52.867 }' 00:19:52.867 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.867 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:52.867 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:52.867 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.867 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:53.126 10:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:53.385 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:53.385 "name": "BaseBdev3", 00:19:53.385 "aliases": [ 00:19:53.385 "ed190f84-8c41-4d7b-99ca-475760cb50a4" 00:19:53.385 ], 00:19:53.385 "product_name": "Malloc disk", 00:19:53.385 "block_size": 512, 00:19:53.385 "num_blocks": 65536, 00:19:53.385 "uuid": "ed190f84-8c41-4d7b-99ca-475760cb50a4", 00:19:53.385 "assigned_rate_limits": { 00:19:53.385 "rw_ios_per_sec": 0, 00:19:53.385 "rw_mbytes_per_sec": 0, 00:19:53.385 "r_mbytes_per_sec": 0, 00:19:53.385 "w_mbytes_per_sec": 0 00:19:53.385 }, 00:19:53.385 "claimed": true, 00:19:53.385 "claim_type": "exclusive_write", 00:19:53.385 "zoned": false, 00:19:53.385 "supported_io_types": { 00:19:53.385 "read": true, 00:19:53.385 "write": true, 00:19:53.385 "unmap": true, 00:19:53.385 "flush": true, 00:19:53.385 "reset": true, 00:19:53.385 "nvme_admin": false, 00:19:53.385 "nvme_io": false, 00:19:53.385 "nvme_io_md": false, 00:19:53.385 "write_zeroes": true, 00:19:53.385 
"zcopy": true, 00:19:53.385 "get_zone_info": false, 00:19:53.385 "zone_management": false, 00:19:53.385 "zone_append": false, 00:19:53.385 "compare": false, 00:19:53.385 "compare_and_write": false, 00:19:53.385 "abort": true, 00:19:53.385 "seek_hole": false, 00:19:53.385 "seek_data": false, 00:19:53.385 "copy": true, 00:19:53.385 "nvme_iov_md": false 00:19:53.385 }, 00:19:53.385 "memory_domains": [ 00:19:53.385 { 00:19:53.385 "dma_device_id": "system", 00:19:53.385 "dma_device_type": 1 00:19:53.385 }, 00:19:53.385 { 00:19:53.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.385 "dma_device_type": 2 00:19:53.385 } 00:19:53.385 ], 00:19:53.385 "driver_specific": {} 00:19:53.385 }' 00:19:53.385 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.385 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.644 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:53.903 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:53.903 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:53.903 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:53.903 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:53.903 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:53.903 "name": "BaseBdev4", 00:19:53.903 "aliases": [ 00:19:53.903 "df8cefd0-579c-4811-a3e7-8643f6c1fe7d" 00:19:53.903 ], 00:19:53.903 "product_name": "Malloc disk", 00:19:53.903 "block_size": 512, 00:19:53.903 "num_blocks": 65536, 00:19:53.903 "uuid": "df8cefd0-579c-4811-a3e7-8643f6c1fe7d", 00:19:53.903 "assigned_rate_limits": { 00:19:53.903 "rw_ios_per_sec": 0, 00:19:53.903 "rw_mbytes_per_sec": 0, 00:19:53.903 "r_mbytes_per_sec": 0, 00:19:53.903 "w_mbytes_per_sec": 0 00:19:53.903 }, 00:19:53.903 "claimed": true, 00:19:53.903 "claim_type": "exclusive_write", 00:19:53.903 "zoned": false, 00:19:53.903 "supported_io_types": { 00:19:53.903 "read": true, 00:19:53.903 "write": true, 00:19:53.903 "unmap": true, 00:19:53.903 "flush": true, 00:19:53.903 "reset": true, 00:19:53.903 "nvme_admin": false, 00:19:53.903 "nvme_io": false, 00:19:53.903 "nvme_io_md": false, 00:19:53.903 "write_zeroes": true, 00:19:53.903 "zcopy": true, 00:19:53.903 "get_zone_info": false, 00:19:53.903 "zone_management": false, 00:19:53.903 "zone_append": false, 00:19:53.903 "compare": false, 00:19:53.903 
"compare_and_write": false, 00:19:53.903 "abort": true, 00:19:53.903 "seek_hole": false, 00:19:53.903 "seek_data": false, 00:19:53.903 "copy": true, 00:19:53.903 "nvme_iov_md": false 00:19:53.903 }, 00:19:53.903 "memory_domains": [ 00:19:53.903 { 00:19:53.903 "dma_device_id": "system", 00:19:53.903 "dma_device_type": 1 00:19:53.903 }, 00:19:53.903 { 00:19:53.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.903 "dma_device_type": 2 00:19:53.903 } 00:19:53.903 ], 00:19:53.903 "driver_specific": {} 00:19:53.903 }' 00:19:53.903 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.204 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.204 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.204 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.204 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.204 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:54.204 10:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.204 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.204 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.204 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.463 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:54.463 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.463 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:54.463 [2024-07-26 10:30:07.345225] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:54.463 [2024-07-26 10:30:07.345252] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.463 [2024-07-26 10:30:07.345296] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.723 "name": "Existed_Raid", 00:19:54.723 "uuid": "4bcefa3e-753f-4a5f-9267-11564fa5bcb7", 00:19:54.723 "strip_size_kb": 64, 00:19:54.723 "state": "offline", 00:19:54.723 "raid_level": "raid0", 00:19:54.723 "superblock": false, 00:19:54.723 "num_base_bdevs": 4, 00:19:54.723 "num_base_bdevs_discovered": 3, 00:19:54.723 "num_base_bdevs_operational": 3, 00:19:54.723 "base_bdevs_list": [ 00:19:54.723 { 00:19:54.723 "name": null, 00:19:54.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.723 "is_configured": false, 00:19:54.723 "data_offset": 0, 00:19:54.723 "data_size": 65536 00:19:54.723 }, 00:19:54.723 { 00:19:54.723 "name": "BaseBdev2", 00:19:54.723 "uuid": "6b301594-f168-472f-93d4-82b263f7df23", 00:19:54.723 "is_configured": true, 00:19:54.723 "data_offset": 0, 00:19:54.723 "data_size": 65536 00:19:54.723 }, 00:19:54.723 { 00:19:54.723 "name": "BaseBdev3", 00:19:54.723 "uuid": "ed190f84-8c41-4d7b-99ca-475760cb50a4", 00:19:54.723 "is_configured": true, 00:19:54.723 "data_offset": 0, 00:19:54.723 "data_size": 65536 00:19:54.723 }, 00:19:54.723 { 00:19:54.723 "name": "BaseBdev4", 00:19:54.723 "uuid": "df8cefd0-579c-4811-a3e7-8643f6c1fe7d", 00:19:54.723 "is_configured": true, 00:19:54.723 "data_offset": 0, 00:19:54.723 "data_size": 65536 00:19:54.723 } 00:19:54.723 ] 00:19:54.723 }' 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.723 10:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.291 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:55.291 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:55.291 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.291 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:55.549 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:55.549 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:55.550 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:55.808 [2024-07-26 10:30:08.609512] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:55.809 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ 
)) 00:19:55.809 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:55.809 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.809 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:56.068 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:56.068 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.068 10:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:56.327 [2024-07-26 10:30:09.077024] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:56.327 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:56.327 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:56.327 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.327 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:56.586 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:56.586 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.586 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:56.845 [2024-07-26 10:30:09.540332] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:56.845 [2024-07-26 10:30:09.540375] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc803e0 name Existed_Raid, state offline 00:19:56.845 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:56.845 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:56.845 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.845 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:57.104 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:57.104 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:57.104 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:57.104 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:57.104 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:57.104 10:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:57.362 BaseBdev2 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev2 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.362 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:57.621 [ 00:19:57.621 { 00:19:57.621 "name": "BaseBdev2", 00:19:57.621 "aliases": [ 00:19:57.621 "e5396796-1912-4503-b068-ff026556c646" 00:19:57.621 ], 00:19:57.621 "product_name": "Malloc disk", 00:19:57.621 "block_size": 512, 00:19:57.621 "num_blocks": 65536, 00:19:57.621 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:19:57.621 "assigned_rate_limits": { 00:19:57.621 "rw_ios_per_sec": 0, 00:19:57.621 "rw_mbytes_per_sec": 0, 00:19:57.621 "r_mbytes_per_sec": 0, 00:19:57.621 "w_mbytes_per_sec": 0 00:19:57.621 }, 00:19:57.621 "claimed": false, 00:19:57.621 "zoned": false, 00:19:57.621 "supported_io_types": { 00:19:57.621 "read": true, 00:19:57.621 "write": true, 00:19:57.621 "unmap": true, 00:19:57.621 "flush": true, 00:19:57.621 "reset": true, 00:19:57.621 "nvme_admin": false, 00:19:57.621 "nvme_io": false, 00:19:57.621 "nvme_io_md": false, 00:19:57.621 "write_zeroes": true, 00:19:57.621 "zcopy": true, 00:19:57.621 "get_zone_info": false, 00:19:57.621 "zone_management": false, 00:19:57.621 "zone_append": false, 00:19:57.621 "compare": false, 00:19:57.621 "compare_and_write": false, 00:19:57.621 "abort": true, 00:19:57.621 "seek_hole": false, 00:19:57.621 "seek_data": false, 00:19:57.621 "copy": true, 00:19:57.621 "nvme_iov_md": false 00:19:57.621 }, 00:19:57.621 "memory_domains": [ 00:19:57.621 { 00:19:57.621 "dma_device_id": "system", 00:19:57.621 "dma_device_type": 1 00:19:57.621 }, 00:19:57.621 { 00:19:57.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.621 "dma_device_type": 2 00:19:57.622 } 00:19:57.622 ], 00:19:57.622 "driver_specific": {} 00:19:57.622 } 00:19:57.622 ] 00:19:57.622 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:57.622 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:57.622 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:57.622 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:57.880 BaseBdev3 00:19:57.880 10:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:57.880 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:57.880 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:57.880 10:30:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:57.880 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.880 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.880 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.138 10:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:58.397 [ 00:19:58.397 { 00:19:58.397 "name": "BaseBdev3", 00:19:58.397 "aliases": [ 00:19:58.397 "64a6c015-23bf-41e5-8bdc-185e8141970c" 00:19:58.397 ], 00:19:58.397 "product_name": "Malloc disk", 00:19:58.397 "block_size": 512, 00:19:58.397 "num_blocks": 65536, 00:19:58.397 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:19:58.397 "assigned_rate_limits": { 00:19:58.397 "rw_ios_per_sec": 0, 00:19:58.397 "rw_mbytes_per_sec": 0, 00:19:58.397 "r_mbytes_per_sec": 0, 00:19:58.397 "w_mbytes_per_sec": 0 00:19:58.397 }, 00:19:58.397 "claimed": false, 00:19:58.397 "zoned": false, 00:19:58.397 "supported_io_types": { 00:19:58.397 "read": true, 00:19:58.397 "write": true, 00:19:58.397 "unmap": true, 00:19:58.397 "flush": true, 00:19:58.397 "reset": true, 00:19:58.397 "nvme_admin": false, 00:19:58.397 "nvme_io": false, 00:19:58.397 "nvme_io_md": false, 00:19:58.397 "write_zeroes": true, 00:19:58.397 "zcopy": true, 00:19:58.397 "get_zone_info": false, 00:19:58.397 "zone_management": false, 00:19:58.397 "zone_append": false, 00:19:58.397 "compare": false, 00:19:58.397 "compare_and_write": false, 00:19:58.397 "abort": true, 00:19:58.397 "seek_hole": false, 00:19:58.397 "seek_data": false, 00:19:58.397 "copy": true, 00:19:58.397 "nvme_iov_md": false 00:19:58.397 }, 00:19:58.397 "memory_domains": [ 00:19:58.397 { 00:19:58.397 "dma_device_id": "system", 00:19:58.397 "dma_device_type": 1 00:19:58.397 }, 00:19:58.397 { 00:19:58.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.397 "dma_device_type": 2 00:19:58.397 } 00:19:58.397 ], 00:19:58.397 "driver_specific": {} 00:19:58.397 } 00:19:58.397 ] 00:19:58.397 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:58.397 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:58.397 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:58.397 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:58.656 BaseBdev4 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:19:58.656 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.915 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:58.915 [ 00:19:58.915 { 00:19:58.915 "name": "BaseBdev4", 00:19:58.915 "aliases": [ 00:19:58.915 "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1" 00:19:58.915 ], 00:19:58.915 "product_name": "Malloc disk", 00:19:58.915 "block_size": 512, 00:19:58.915 "num_blocks": 65536, 00:19:58.915 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:19:58.915 "assigned_rate_limits": { 00:19:58.915 "rw_ios_per_sec": 0, 00:19:58.915 "rw_mbytes_per_sec": 0, 00:19:58.915 "r_mbytes_per_sec": 0, 00:19:58.915 "w_mbytes_per_sec": 0 00:19:58.915 }, 00:19:58.915 "claimed": false, 00:19:58.915 "zoned": false, 00:19:58.915 "supported_io_types": { 00:19:58.915 "read": true, 00:19:58.915 "write": true, 00:19:58.915 "unmap": true, 00:19:58.915 "flush": true, 00:19:58.915 "reset": true, 00:19:58.915 "nvme_admin": false, 00:19:58.915 "nvme_io": false, 00:19:58.915 "nvme_io_md": false, 00:19:58.915 "write_zeroes": true, 00:19:58.915 "zcopy": true, 00:19:58.915 "get_zone_info": false, 00:19:58.915 "zone_management": false, 00:19:58.915 "zone_append": false, 00:19:58.915 "compare": false, 00:19:58.915 "compare_and_write": false, 00:19:58.915 "abort": true, 00:19:58.915 "seek_hole": false, 00:19:58.915 "seek_data": false, 00:19:58.915 "copy": true, 00:19:58.915 "nvme_iov_md": false 00:19:58.915 }, 00:19:58.915 "memory_domains": [ 00:19:58.915 { 00:19:58.915 "dma_device_id": "system", 00:19:58.915 "dma_device_type": 1 00:19:58.915 }, 00:19:58.915 { 00:19:58.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.915 "dma_device_type": 2 00:19:58.915 } 00:19:58.915 ], 00:19:58.915 "driver_specific": {} 00:19:58.915 } 00:19:58.915 ] 00:19:58.915 10:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:58.915 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:58.915 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:58.915 10:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:59.174 [2024-07-26 10:30:12.001839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.174 [2024-07-26 10:30:12.001878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.174 [2024-07-26 10:30:12.001897] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.174 [2024-07-26 10:30:12.003097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:59.174 [2024-07-26 10:30:12.003137] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.174 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.433 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.433 "name": "Existed_Raid", 00:19:59.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.433 "strip_size_kb": 64, 00:19:59.433 "state": "configuring", 00:19:59.433 "raid_level": "raid0", 00:19:59.433 "superblock": false, 00:19:59.433 "num_base_bdevs": 4, 00:19:59.433 "num_base_bdevs_discovered": 3, 00:19:59.433 "num_base_bdevs_operational": 4, 00:19:59.433 "base_bdevs_list": [ 00:19:59.433 { 00:19:59.433 "name": "BaseBdev1", 00:19:59.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.433 "is_configured": false, 00:19:59.433 "data_offset": 0, 00:19:59.433 "data_size": 0 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "name": "BaseBdev2", 00:19:59.433 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:19:59.433 "is_configured": true, 00:19:59.433 "data_offset": 0, 00:19:59.433 "data_size": 65536 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "name": "BaseBdev3", 00:19:59.433 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:19:59.433 "is_configured": true, 00:19:59.433 "data_offset": 0, 00:19:59.433 "data_size": 65536 00:19:59.433 }, 00:19:59.433 { 00:19:59.433 "name": "BaseBdev4", 00:19:59.433 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:19:59.433 "is_configured": true, 00:19:59.433 "data_offset": 0, 00:19:59.433 "data_size": 65536 00:19:59.433 } 00:19:59.433 ] 00:19:59.433 }' 00:19:59.433 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.433 10:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.000 10:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:00.258 [2024-07-26 10:30:13.004452] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.258 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.516 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.516 "name": "Existed_Raid", 00:20:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.516 "strip_size_kb": 64, 00:20:00.516 "state": "configuring", 00:20:00.516 "raid_level": "raid0", 00:20:00.516 "superblock": false, 00:20:00.516 "num_base_bdevs": 4, 00:20:00.516 "num_base_bdevs_discovered": 2, 00:20:00.516 "num_base_bdevs_operational": 4, 00:20:00.516 "base_bdevs_list": [ 00:20:00.516 { 00:20:00.516 "name": "BaseBdev1", 00:20:00.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.516 "is_configured": false, 00:20:00.516 "data_offset": 0, 00:20:00.516 "data_size": 0 00:20:00.516 }, 00:20:00.516 { 00:20:00.516 "name": null, 00:20:00.516 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:00.516 "is_configured": false, 00:20:00.516 "data_offset": 0, 00:20:00.516 "data_size": 65536 00:20:00.516 }, 00:20:00.516 { 00:20:00.516 "name": "BaseBdev3", 00:20:00.516 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:00.516 "is_configured": true, 00:20:00.516 "data_offset": 0, 00:20:00.516 "data_size": 65536 00:20:00.516 }, 00:20:00.516 { 00:20:00.516 "name": "BaseBdev4", 00:20:00.516 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:00.516 "is_configured": true, 00:20:00.516 "data_offset": 0, 00:20:00.516 "data_size": 65536 00:20:00.516 } 00:20:00.516 ] 00:20:00.516 }' 00:20:00.516 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.516 10:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.084 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.084 10:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:01.343 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:01.343 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:01.602 [2024-07-26 10:30:14.258822] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
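The remove/re-add cycle traced above maps onto a handful of rpc.py calls; the sketch below replays them by hand, assuming the bdev_svc app from this run is still listening on /var/tmp/spdk-raid.sock and the Existed_Raid bdev is still in the configuring state (only commands and flags that already appear in this log are used):

    RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    # Drop one configured base bdev; the raid stays in "configuring" and the
    # emptied slot is reported with "name": null and "is_configured": false.
    $RPC -s $SOCK bdev_raid_remove_base_bdev BaseBdev2
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'

    # Re-create the missing BaseBdev1 (the log shows the result as a Malloc disk
    # with 65536 blocks of 512 bytes), then wait for it to be examined and claimed.
    $RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1
    $RPC -s $SOCK bdev_wait_for_examine
    $RPC -s $SOCK bdev_get_bdevs -b BaseBdev1 -t 2000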
00:20:01.602 BaseBdev1 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:01.602 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.861 [ 00:20:01.861 { 00:20:01.861 "name": "BaseBdev1", 00:20:01.861 "aliases": [ 00:20:01.861 "78df6055-82cf-46ca-93ea-a1ac68252f10" 00:20:01.861 ], 00:20:01.861 "product_name": "Malloc disk", 00:20:01.861 "block_size": 512, 00:20:01.861 "num_blocks": 65536, 00:20:01.861 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:01.861 "assigned_rate_limits": { 00:20:01.861 "rw_ios_per_sec": 0, 00:20:01.861 "rw_mbytes_per_sec": 0, 00:20:01.861 "r_mbytes_per_sec": 0, 00:20:01.861 "w_mbytes_per_sec": 0 00:20:01.861 }, 00:20:01.861 "claimed": true, 00:20:01.861 "claim_type": "exclusive_write", 00:20:01.861 "zoned": false, 00:20:01.861 "supported_io_types": { 00:20:01.861 "read": true, 00:20:01.861 "write": true, 00:20:01.861 "unmap": true, 00:20:01.861 "flush": true, 00:20:01.861 "reset": true, 00:20:01.861 "nvme_admin": false, 00:20:01.861 "nvme_io": false, 00:20:01.861 "nvme_io_md": false, 00:20:01.861 "write_zeroes": true, 00:20:01.861 "zcopy": true, 00:20:01.861 "get_zone_info": false, 00:20:01.861 "zone_management": false, 00:20:01.861 "zone_append": false, 00:20:01.861 "compare": false, 00:20:01.861 "compare_and_write": false, 00:20:01.861 "abort": true, 00:20:01.861 "seek_hole": false, 00:20:01.861 "seek_data": false, 00:20:01.861 "copy": true, 00:20:01.861 "nvme_iov_md": false 00:20:01.861 }, 00:20:01.861 "memory_domains": [ 00:20:01.861 { 00:20:01.861 "dma_device_id": "system", 00:20:01.861 "dma_device_type": 1 00:20:01.861 }, 00:20:01.861 { 00:20:01.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.861 "dma_device_type": 2 00:20:01.861 } 00:20:01.861 ], 00:20:01.861 "driver_specific": {} 00:20:01.861 } 00:20:01.861 ] 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.861 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.121 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.121 "name": "Existed_Raid", 00:20:02.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.121 "strip_size_kb": 64, 00:20:02.121 "state": "configuring", 00:20:02.121 "raid_level": "raid0", 00:20:02.121 "superblock": false, 00:20:02.121 "num_base_bdevs": 4, 00:20:02.121 "num_base_bdevs_discovered": 3, 00:20:02.121 "num_base_bdevs_operational": 4, 00:20:02.121 "base_bdevs_list": [ 00:20:02.121 { 00:20:02.121 "name": "BaseBdev1", 00:20:02.121 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:02.121 "is_configured": true, 00:20:02.121 "data_offset": 0, 00:20:02.121 "data_size": 65536 00:20:02.121 }, 00:20:02.121 { 00:20:02.121 "name": null, 00:20:02.121 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:02.121 "is_configured": false, 00:20:02.121 "data_offset": 0, 00:20:02.121 "data_size": 65536 00:20:02.121 }, 00:20:02.121 { 00:20:02.121 "name": "BaseBdev3", 00:20:02.121 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:02.121 "is_configured": true, 00:20:02.121 "data_offset": 0, 00:20:02.121 "data_size": 65536 00:20:02.121 }, 00:20:02.121 { 00:20:02.121 "name": "BaseBdev4", 00:20:02.121 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:02.121 "is_configured": true, 00:20:02.121 "data_offset": 0, 00:20:02.121 "data_size": 65536 00:20:02.121 } 00:20:02.121 ] 00:20:02.121 }' 00:20:02.121 10:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.121 10:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.689 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.689 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:02.948 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:02.948 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:03.208 [2024-07-26 10:30:15.919445] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.208 10:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.467 10:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.468 "name": "Existed_Raid", 00:20:03.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.468 "strip_size_kb": 64, 00:20:03.468 "state": "configuring", 00:20:03.468 "raid_level": "raid0", 00:20:03.468 "superblock": false, 00:20:03.468 "num_base_bdevs": 4, 00:20:03.468 "num_base_bdevs_discovered": 2, 00:20:03.468 "num_base_bdevs_operational": 4, 00:20:03.468 "base_bdevs_list": [ 00:20:03.468 { 00:20:03.468 "name": "BaseBdev1", 00:20:03.468 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:03.468 "is_configured": true, 00:20:03.468 "data_offset": 0, 00:20:03.468 "data_size": 65536 00:20:03.468 }, 00:20:03.468 { 00:20:03.468 "name": null, 00:20:03.468 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:03.468 "is_configured": false, 00:20:03.468 "data_offset": 0, 00:20:03.468 "data_size": 65536 00:20:03.468 }, 00:20:03.468 { 00:20:03.468 "name": null, 00:20:03.468 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:03.468 "is_configured": false, 00:20:03.468 "data_offset": 0, 00:20:03.468 "data_size": 65536 00:20:03.468 }, 00:20:03.468 { 00:20:03.468 "name": "BaseBdev4", 00:20:03.468 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:03.468 "is_configured": true, 00:20:03.468 "data_offset": 0, 00:20:03.468 "data_size": 65536 00:20:03.468 } 00:20:03.468 ] 00:20:03.468 }' 00:20:03.468 10:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.468 10:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.036 10:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.036 10:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:04.296 10:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:04.296 10:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:04.296 [2024-07-26 10:30:17.190801] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.555 "name": "Existed_Raid", 00:20:04.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.555 "strip_size_kb": 64, 00:20:04.555 "state": "configuring", 00:20:04.555 "raid_level": "raid0", 00:20:04.555 "superblock": false, 00:20:04.555 "num_base_bdevs": 4, 00:20:04.555 "num_base_bdevs_discovered": 3, 00:20:04.555 "num_base_bdevs_operational": 4, 00:20:04.555 "base_bdevs_list": [ 00:20:04.555 { 00:20:04.555 "name": "BaseBdev1", 00:20:04.555 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:04.555 "is_configured": true, 00:20:04.555 "data_offset": 0, 00:20:04.555 "data_size": 65536 00:20:04.555 }, 00:20:04.555 { 00:20:04.555 "name": null, 00:20:04.555 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:04.555 "is_configured": false, 00:20:04.555 "data_offset": 0, 00:20:04.555 "data_size": 65536 00:20:04.555 }, 00:20:04.555 { 00:20:04.555 "name": "BaseBdev3", 00:20:04.555 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:04.555 "is_configured": true, 00:20:04.555 "data_offset": 0, 00:20:04.555 "data_size": 65536 00:20:04.555 }, 00:20:04.555 { 00:20:04.555 "name": "BaseBdev4", 00:20:04.555 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:04.555 "is_configured": true, 00:20:04.555 "data_offset": 0, 00:20:04.555 "data_size": 65536 00:20:04.555 } 00:20:04.555 ] 00:20:04.555 }' 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.555 10:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.124 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.124 10:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:05.383 10:30:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:05.383 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:05.643 [2024-07-26 10:30:18.426097] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.643 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.934 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.934 "name": "Existed_Raid", 00:20:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.934 "strip_size_kb": 64, 00:20:05.934 "state": "configuring", 00:20:05.934 "raid_level": "raid0", 00:20:05.934 "superblock": false, 00:20:05.934 "num_base_bdevs": 4, 00:20:05.934 "num_base_bdevs_discovered": 2, 00:20:05.934 "num_base_bdevs_operational": 4, 00:20:05.934 "base_bdevs_list": [ 00:20:05.934 { 00:20:05.934 "name": null, 00:20:05.934 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:05.934 "is_configured": false, 00:20:05.934 "data_offset": 0, 00:20:05.934 "data_size": 65536 00:20:05.934 }, 00:20:05.934 { 00:20:05.934 "name": null, 00:20:05.934 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:05.934 "is_configured": false, 00:20:05.934 "data_offset": 0, 00:20:05.934 "data_size": 65536 00:20:05.934 }, 00:20:05.934 { 00:20:05.934 "name": "BaseBdev3", 00:20:05.934 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:05.934 "is_configured": true, 00:20:05.934 "data_offset": 0, 00:20:05.934 "data_size": 65536 00:20:05.934 }, 00:20:05.934 { 00:20:05.934 "name": "BaseBdev4", 00:20:05.934 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:05.934 "is_configured": true, 00:20:05.934 "data_offset": 0, 00:20:05.934 "data_size": 65536 00:20:05.934 } 00:20:05.934 ] 00:20:05.934 }' 00:20:05.934 10:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.934 10:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.501 10:30:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.501 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:06.760 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:06.760 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:07.020 [2024-07-26 10:30:19.695373] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.020 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.279 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.279 "name": "Existed_Raid", 00:20:07.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.280 "strip_size_kb": 64, 00:20:07.280 "state": "configuring", 00:20:07.280 "raid_level": "raid0", 00:20:07.280 "superblock": false, 00:20:07.280 "num_base_bdevs": 4, 00:20:07.280 "num_base_bdevs_discovered": 3, 00:20:07.280 "num_base_bdevs_operational": 4, 00:20:07.280 "base_bdevs_list": [ 00:20:07.280 { 00:20:07.280 "name": null, 00:20:07.280 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:07.280 "is_configured": false, 00:20:07.280 "data_offset": 0, 00:20:07.280 "data_size": 65536 00:20:07.280 }, 00:20:07.280 { 00:20:07.280 "name": "BaseBdev2", 00:20:07.280 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:07.280 "is_configured": true, 00:20:07.280 "data_offset": 0, 00:20:07.280 "data_size": 65536 00:20:07.280 }, 00:20:07.280 { 00:20:07.280 "name": "BaseBdev3", 00:20:07.280 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:07.280 "is_configured": true, 00:20:07.280 "data_offset": 0, 00:20:07.280 "data_size": 65536 00:20:07.280 }, 00:20:07.280 { 00:20:07.280 "name": "BaseBdev4", 00:20:07.280 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:07.280 "is_configured": true, 
00:20:07.280 "data_offset": 0, 00:20:07.280 "data_size": 65536 00:20:07.280 } 00:20:07.280 ] 00:20:07.280 }' 00:20:07.280 10:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.280 10:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.848 10:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.848 10:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:08.107 10:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:08.107 10:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:08.107 10:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.107 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 78df6055-82cf-46ca-93ea-a1ac68252f10 00:20:08.366 [2024-07-26 10:30:21.174434] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:08.366 [2024-07-26 10:30:21.174468] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xacaf80 00:20:08.366 [2024-07-26 10:30:21.174476] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:08.366 [2024-07-26 10:30:21.174655] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xacfb20 00:20:08.366 [2024-07-26 10:30:21.174760] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xacaf80 00:20:08.366 [2024-07-26 10:30:21.174769] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xacaf80 00:20:08.366 [2024-07-26 10:30:21.174915] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.366 NewBaseBdev 00:20:08.366 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:08.366 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:08.367 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:08.367 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:08.367 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:08.367 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:08.367 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.626 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:08.885 [ 00:20:08.885 { 00:20:08.885 "name": "NewBaseBdev", 00:20:08.885 "aliases": [ 00:20:08.885 "78df6055-82cf-46ca-93ea-a1ac68252f10" 00:20:08.885 ], 00:20:08.885 "product_name": "Malloc disk", 
00:20:08.885 "block_size": 512, 00:20:08.885 "num_blocks": 65536, 00:20:08.885 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:08.885 "assigned_rate_limits": { 00:20:08.885 "rw_ios_per_sec": 0, 00:20:08.885 "rw_mbytes_per_sec": 0, 00:20:08.885 "r_mbytes_per_sec": 0, 00:20:08.885 "w_mbytes_per_sec": 0 00:20:08.885 }, 00:20:08.885 "claimed": true, 00:20:08.885 "claim_type": "exclusive_write", 00:20:08.885 "zoned": false, 00:20:08.885 "supported_io_types": { 00:20:08.885 "read": true, 00:20:08.885 "write": true, 00:20:08.885 "unmap": true, 00:20:08.885 "flush": true, 00:20:08.885 "reset": true, 00:20:08.885 "nvme_admin": false, 00:20:08.885 "nvme_io": false, 00:20:08.885 "nvme_io_md": false, 00:20:08.885 "write_zeroes": true, 00:20:08.885 "zcopy": true, 00:20:08.885 "get_zone_info": false, 00:20:08.885 "zone_management": false, 00:20:08.885 "zone_append": false, 00:20:08.885 "compare": false, 00:20:08.885 "compare_and_write": false, 00:20:08.885 "abort": true, 00:20:08.885 "seek_hole": false, 00:20:08.885 "seek_data": false, 00:20:08.885 "copy": true, 00:20:08.885 "nvme_iov_md": false 00:20:08.885 }, 00:20:08.885 "memory_domains": [ 00:20:08.885 { 00:20:08.885 "dma_device_id": "system", 00:20:08.885 "dma_device_type": 1 00:20:08.885 }, 00:20:08.885 { 00:20:08.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.885 "dma_device_type": 2 00:20:08.885 } 00:20:08.885 ], 00:20:08.885 "driver_specific": {} 00:20:08.885 } 00:20:08.885 ] 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.885 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.144 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.144 "name": "Existed_Raid", 00:20:09.144 "uuid": "91214f83-2fc6-41f8-9422-8fc34da4a879", 00:20:09.144 "strip_size_kb": 64, 00:20:09.144 "state": "online", 00:20:09.145 "raid_level": "raid0", 00:20:09.145 "superblock": false, 00:20:09.145 "num_base_bdevs": 4, 00:20:09.145 "num_base_bdevs_discovered": 4, 00:20:09.145 "num_base_bdevs_operational": 4, 00:20:09.145 "base_bdevs_list": [ 
00:20:09.145 { 00:20:09.145 "name": "NewBaseBdev", 00:20:09.145 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:09.145 "is_configured": true, 00:20:09.145 "data_offset": 0, 00:20:09.145 "data_size": 65536 00:20:09.145 }, 00:20:09.145 { 00:20:09.145 "name": "BaseBdev2", 00:20:09.145 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:09.145 "is_configured": true, 00:20:09.145 "data_offset": 0, 00:20:09.145 "data_size": 65536 00:20:09.145 }, 00:20:09.145 { 00:20:09.145 "name": "BaseBdev3", 00:20:09.145 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:09.145 "is_configured": true, 00:20:09.145 "data_offset": 0, 00:20:09.145 "data_size": 65536 00:20:09.145 }, 00:20:09.145 { 00:20:09.145 "name": "BaseBdev4", 00:20:09.145 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:09.145 "is_configured": true, 00:20:09.145 "data_offset": 0, 00:20:09.145 "data_size": 65536 00:20:09.145 } 00:20:09.145 ] 00:20:09.145 }' 00:20:09.145 10:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.145 10:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:09.713 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:09.972 [2024-07-26 10:30:22.630591] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.972 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:09.972 "name": "Existed_Raid", 00:20:09.972 "aliases": [ 00:20:09.972 "91214f83-2fc6-41f8-9422-8fc34da4a879" 00:20:09.972 ], 00:20:09.972 "product_name": "Raid Volume", 00:20:09.972 "block_size": 512, 00:20:09.972 "num_blocks": 262144, 00:20:09.972 "uuid": "91214f83-2fc6-41f8-9422-8fc34da4a879", 00:20:09.972 "assigned_rate_limits": { 00:20:09.972 "rw_ios_per_sec": 0, 00:20:09.972 "rw_mbytes_per_sec": 0, 00:20:09.972 "r_mbytes_per_sec": 0, 00:20:09.972 "w_mbytes_per_sec": 0 00:20:09.972 }, 00:20:09.972 "claimed": false, 00:20:09.972 "zoned": false, 00:20:09.972 "supported_io_types": { 00:20:09.973 "read": true, 00:20:09.973 "write": true, 00:20:09.973 "unmap": true, 00:20:09.973 "flush": true, 00:20:09.973 "reset": true, 00:20:09.973 "nvme_admin": false, 00:20:09.973 "nvme_io": false, 00:20:09.973 "nvme_io_md": false, 00:20:09.973 "write_zeroes": true, 00:20:09.973 "zcopy": false, 00:20:09.973 "get_zone_info": false, 00:20:09.973 "zone_management": false, 00:20:09.973 "zone_append": false, 00:20:09.973 "compare": false, 00:20:09.973 "compare_and_write": false, 00:20:09.973 "abort": false, 00:20:09.973 "seek_hole": false, 00:20:09.973 "seek_data": false, 00:20:09.973 "copy": false, 00:20:09.973 
"nvme_iov_md": false 00:20:09.973 }, 00:20:09.973 "memory_domains": [ 00:20:09.973 { 00:20:09.973 "dma_device_id": "system", 00:20:09.973 "dma_device_type": 1 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.973 "dma_device_type": 2 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "system", 00:20:09.973 "dma_device_type": 1 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.973 "dma_device_type": 2 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "system", 00:20:09.973 "dma_device_type": 1 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.973 "dma_device_type": 2 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "system", 00:20:09.973 "dma_device_type": 1 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.973 "dma_device_type": 2 00:20:09.973 } 00:20:09.973 ], 00:20:09.973 "driver_specific": { 00:20:09.973 "raid": { 00:20:09.973 "uuid": "91214f83-2fc6-41f8-9422-8fc34da4a879", 00:20:09.973 "strip_size_kb": 64, 00:20:09.973 "state": "online", 00:20:09.973 "raid_level": "raid0", 00:20:09.973 "superblock": false, 00:20:09.973 "num_base_bdevs": 4, 00:20:09.973 "num_base_bdevs_discovered": 4, 00:20:09.973 "num_base_bdevs_operational": 4, 00:20:09.973 "base_bdevs_list": [ 00:20:09.973 { 00:20:09.973 "name": "NewBaseBdev", 00:20:09.973 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 00:20:09.973 "is_configured": true, 00:20:09.973 "data_offset": 0, 00:20:09.973 "data_size": 65536 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "name": "BaseBdev2", 00:20:09.973 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:09.973 "is_configured": true, 00:20:09.973 "data_offset": 0, 00:20:09.973 "data_size": 65536 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "name": "BaseBdev3", 00:20:09.973 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:09.973 "is_configured": true, 00:20:09.973 "data_offset": 0, 00:20:09.973 "data_size": 65536 00:20:09.973 }, 00:20:09.973 { 00:20:09.973 "name": "BaseBdev4", 00:20:09.973 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:09.973 "is_configured": true, 00:20:09.973 "data_offset": 0, 00:20:09.973 "data_size": 65536 00:20:09.973 } 00:20:09.973 ] 00:20:09.973 } 00:20:09.973 } 00:20:09.973 }' 00:20:09.973 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.973 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:09.973 BaseBdev2 00:20:09.973 BaseBdev3 00:20:09.973 BaseBdev4' 00:20:09.973 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.973 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:09.973 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.232 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.232 "name": "NewBaseBdev", 00:20:10.232 "aliases": [ 00:20:10.232 "78df6055-82cf-46ca-93ea-a1ac68252f10" 00:20:10.232 ], 00:20:10.232 "product_name": "Malloc disk", 00:20:10.232 "block_size": 512, 00:20:10.232 "num_blocks": 65536, 00:20:10.232 "uuid": "78df6055-82cf-46ca-93ea-a1ac68252f10", 
00:20:10.232 "assigned_rate_limits": { 00:20:10.232 "rw_ios_per_sec": 0, 00:20:10.232 "rw_mbytes_per_sec": 0, 00:20:10.232 "r_mbytes_per_sec": 0, 00:20:10.232 "w_mbytes_per_sec": 0 00:20:10.232 }, 00:20:10.232 "claimed": true, 00:20:10.232 "claim_type": "exclusive_write", 00:20:10.232 "zoned": false, 00:20:10.232 "supported_io_types": { 00:20:10.232 "read": true, 00:20:10.232 "write": true, 00:20:10.232 "unmap": true, 00:20:10.232 "flush": true, 00:20:10.232 "reset": true, 00:20:10.232 "nvme_admin": false, 00:20:10.232 "nvme_io": false, 00:20:10.232 "nvme_io_md": false, 00:20:10.232 "write_zeroes": true, 00:20:10.232 "zcopy": true, 00:20:10.232 "get_zone_info": false, 00:20:10.232 "zone_management": false, 00:20:10.232 "zone_append": false, 00:20:10.232 "compare": false, 00:20:10.232 "compare_and_write": false, 00:20:10.232 "abort": true, 00:20:10.232 "seek_hole": false, 00:20:10.232 "seek_data": false, 00:20:10.232 "copy": true, 00:20:10.232 "nvme_iov_md": false 00:20:10.232 }, 00:20:10.232 "memory_domains": [ 00:20:10.232 { 00:20:10.232 "dma_device_id": "system", 00:20:10.232 "dma_device_type": 1 00:20:10.232 }, 00:20:10.232 { 00:20:10.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.232 "dma_device_type": 2 00:20:10.232 } 00:20:10.232 ], 00:20:10.232 "driver_specific": {} 00:20:10.232 }' 00:20:10.232 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.232 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.232 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.232 10:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.232 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.232 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.232 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.232 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:10.491 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.748 "name": "BaseBdev2", 00:20:10.748 "aliases": [ 00:20:10.748 "e5396796-1912-4503-b068-ff026556c646" 00:20:10.748 ], 00:20:10.748 "product_name": "Malloc disk", 00:20:10.748 "block_size": 512, 00:20:10.748 "num_blocks": 65536, 00:20:10.748 "uuid": "e5396796-1912-4503-b068-ff026556c646", 00:20:10.748 "assigned_rate_limits": { 00:20:10.748 "rw_ios_per_sec": 0, 00:20:10.748 "rw_mbytes_per_sec": 0, 00:20:10.748 "r_mbytes_per_sec": 0, 00:20:10.748 "w_mbytes_per_sec": 0 
00:20:10.748 }, 00:20:10.748 "claimed": true, 00:20:10.748 "claim_type": "exclusive_write", 00:20:10.748 "zoned": false, 00:20:10.748 "supported_io_types": { 00:20:10.748 "read": true, 00:20:10.748 "write": true, 00:20:10.748 "unmap": true, 00:20:10.748 "flush": true, 00:20:10.748 "reset": true, 00:20:10.748 "nvme_admin": false, 00:20:10.748 "nvme_io": false, 00:20:10.748 "nvme_io_md": false, 00:20:10.748 "write_zeroes": true, 00:20:10.748 "zcopy": true, 00:20:10.748 "get_zone_info": false, 00:20:10.748 "zone_management": false, 00:20:10.748 "zone_append": false, 00:20:10.748 "compare": false, 00:20:10.748 "compare_and_write": false, 00:20:10.748 "abort": true, 00:20:10.748 "seek_hole": false, 00:20:10.748 "seek_data": false, 00:20:10.748 "copy": true, 00:20:10.748 "nvme_iov_md": false 00:20:10.748 }, 00:20:10.748 "memory_domains": [ 00:20:10.748 { 00:20:10.748 "dma_device_id": "system", 00:20:10.748 "dma_device_type": 1 00:20:10.748 }, 00:20:10.748 { 00:20:10.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.748 "dma_device_type": 2 00:20:10.748 } 00:20:10.748 ], 00:20:10.748 "driver_specific": {} 00:20:10.748 }' 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.748 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.006 "name": "BaseBdev3", 00:20:11.006 "aliases": [ 00:20:11.006 "64a6c015-23bf-41e5-8bdc-185e8141970c" 00:20:11.006 ], 00:20:11.006 "product_name": "Malloc disk", 00:20:11.006 "block_size": 512, 00:20:11.006 "num_blocks": 65536, 00:20:11.006 "uuid": "64a6c015-23bf-41e5-8bdc-185e8141970c", 00:20:11.006 "assigned_rate_limits": { 00:20:11.006 "rw_ios_per_sec": 0, 00:20:11.006 "rw_mbytes_per_sec": 0, 00:20:11.006 "r_mbytes_per_sec": 0, 00:20:11.006 "w_mbytes_per_sec": 0 00:20:11.006 }, 00:20:11.006 "claimed": true, 00:20:11.006 "claim_type": "exclusive_write", 00:20:11.006 "zoned": false, 00:20:11.006 "supported_io_types": { 00:20:11.006 "read": 
true, 00:20:11.006 "write": true, 00:20:11.006 "unmap": true, 00:20:11.006 "flush": true, 00:20:11.006 "reset": true, 00:20:11.006 "nvme_admin": false, 00:20:11.006 "nvme_io": false, 00:20:11.006 "nvme_io_md": false, 00:20:11.006 "write_zeroes": true, 00:20:11.006 "zcopy": true, 00:20:11.006 "get_zone_info": false, 00:20:11.006 "zone_management": false, 00:20:11.006 "zone_append": false, 00:20:11.006 "compare": false, 00:20:11.006 "compare_and_write": false, 00:20:11.006 "abort": true, 00:20:11.006 "seek_hole": false, 00:20:11.006 "seek_data": false, 00:20:11.006 "copy": true, 00:20:11.006 "nvme_iov_md": false 00:20:11.006 }, 00:20:11.006 "memory_domains": [ 00:20:11.006 { 00:20:11.006 "dma_device_id": "system", 00:20:11.006 "dma_device_type": 1 00:20:11.006 }, 00:20:11.006 { 00:20:11.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.006 "dma_device_type": 2 00:20:11.006 } 00:20:11.006 ], 00:20:11.006 "driver_specific": {} 00:20:11.006 }' 00:20:11.006 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.264 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.264 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.264 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.264 10:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.264 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.264 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.264 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.264 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.264 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.264 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.522 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.522 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.522 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.522 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:11.523 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.523 "name": "BaseBdev4", 00:20:11.523 "aliases": [ 00:20:11.523 "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1" 00:20:11.523 ], 00:20:11.523 "product_name": "Malloc disk", 00:20:11.523 "block_size": 512, 00:20:11.523 "num_blocks": 65536, 00:20:11.523 "uuid": "e7d6fd6b-cc40-40a9-ae53-81d569bc26a1", 00:20:11.523 "assigned_rate_limits": { 00:20:11.523 "rw_ios_per_sec": 0, 00:20:11.523 "rw_mbytes_per_sec": 0, 00:20:11.523 "r_mbytes_per_sec": 0, 00:20:11.523 "w_mbytes_per_sec": 0 00:20:11.523 }, 00:20:11.523 "claimed": true, 00:20:11.523 "claim_type": "exclusive_write", 00:20:11.523 "zoned": false, 00:20:11.523 "supported_io_types": { 00:20:11.523 "read": true, 00:20:11.523 "write": true, 00:20:11.523 "unmap": true, 00:20:11.523 "flush": true, 00:20:11.523 "reset": true, 00:20:11.523 "nvme_admin": false, 00:20:11.523 "nvme_io": 
false, 00:20:11.523 "nvme_io_md": false, 00:20:11.523 "write_zeroes": true, 00:20:11.523 "zcopy": true, 00:20:11.523 "get_zone_info": false, 00:20:11.523 "zone_management": false, 00:20:11.523 "zone_append": false, 00:20:11.523 "compare": false, 00:20:11.523 "compare_and_write": false, 00:20:11.523 "abort": true, 00:20:11.523 "seek_hole": false, 00:20:11.523 "seek_data": false, 00:20:11.523 "copy": true, 00:20:11.523 "nvme_iov_md": false 00:20:11.523 }, 00:20:11.523 "memory_domains": [ 00:20:11.523 { 00:20:11.523 "dma_device_id": "system", 00:20:11.523 "dma_device_type": 1 00:20:11.523 }, 00:20:11.523 { 00:20:11.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.523 "dma_device_type": 2 00:20:11.523 } 00:20:11.523 ], 00:20:11.523 "driver_specific": {} 00:20:11.523 }' 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.781 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.039 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.039 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.039 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:12.299 [2024-07-26 10:30:24.964453] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:12.299 [2024-07-26 10:30:24.964477] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.299 [2024-07-26 10:30:24.964525] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.299 [2024-07-26 10:30:24.964579] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.299 [2024-07-26 10:30:24.964589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xacaf80 name Existed_Raid, state offline 00:20:12.299 10:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3418200 00:20:12.299 10:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3418200 ']' 00:20:12.299 10:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3418200 00:20:12.299 10:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:12.299 10:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.299 10:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3418200 00:20:12.299 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:12.299 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:12.299 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3418200' 00:20:12.299 killing process with pid 3418200 00:20:12.299 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3418200 00:20:12.299 [2024-07-26 10:30:25.045412] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.299 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3418200 00:20:12.299 [2024-07-26 10:30:25.076376] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:12.559 00:20:12.559 real 0m30.262s 00:20:12.559 user 0m55.589s 00:20:12.559 sys 0m5.468s 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 ************************************ 00:20:12.559 END TEST raid_state_function_test 00:20:12.559 ************************************ 00:20:12.559 10:30:25 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:20:12.559 10:30:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:12.559 10:30:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.559 10:30:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 ************************************ 00:20:12.559 START TEST raid_state_function_test_sb 00:20:12.559 ************************************ 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:12.559 10:30:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3424434 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3424434' 00:20:12.559 Process raid pid: 3424434 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3424434 /var/tmp/spdk-raid.sock 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3424434 ']' 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:12.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.559 10:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 [2024-07-26 10:30:25.399910] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
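The raid_state_function_test_sb run that starts here exercises the same configuring/online state machine; the difference visible in this log is that bdev_raid_create is invoked with the additional -s (superblock) argument. A minimal sketch of that create step and the state read-back, assuming the same four malloc base bdevs and the RPC socket printed above:

    RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    # Same raid0 layout with a 64 KiB strip size as before, but with an
    # on-disk superblock requested via -s.
    $RPC -s $SOCK bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # verify_raid_bdev_state reads the state back with the same jq filter.
    $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'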
00:20:12.559 [2024-07-26 10:30:25.399967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.559 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:12.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.559 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:12.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.559 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:12.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.559 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:12.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.559 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:12.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.559 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:12.819 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:12.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:12.819 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:12.819 [2024-07-26 10:30:25.524652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.819 [2024-07-26 10:30:25.567854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.819 [2024-07-26 10:30:25.631707] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.819 [2024-07-26 10:30:25.631742] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.387 10:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.387 10:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:13.387 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:13.647 [2024-07-26 10:30:26.456204] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:13.647 [2024-07-26 10:30:26.456244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:13.647 [2024-07-26 10:30:26.456254] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.647 [2024-07-26 10:30:26.456265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:13.647 [2024-07-26 10:30:26.456273] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:13.647 [2024-07-26 10:30:26.456283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:13.647 [2024-07-26 10:30:26.456291] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:13.647 [2024-07-26 10:30:26.456308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.647 10:30:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.647 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.906 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.906 "name": "Existed_Raid", 00:20:13.906 "uuid": "15b6b0c1-4adf-498f-a4cf-06ba7cf52dd3", 00:20:13.906 "strip_size_kb": 64, 00:20:13.906 "state": "configuring", 00:20:13.906 "raid_level": "raid0", 00:20:13.906 "superblock": true, 00:20:13.906 "num_base_bdevs": 4, 00:20:13.906 "num_base_bdevs_discovered": 0, 00:20:13.906 "num_base_bdevs_operational": 4, 00:20:13.906 "base_bdevs_list": [ 00:20:13.906 { 00:20:13.906 "name": "BaseBdev1", 00:20:13.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.906 "is_configured": false, 00:20:13.906 "data_offset": 0, 00:20:13.906 "data_size": 0 00:20:13.906 }, 00:20:13.906 { 00:20:13.906 "name": "BaseBdev2", 00:20:13.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.906 "is_configured": false, 00:20:13.906 "data_offset": 0, 00:20:13.906 "data_size": 0 00:20:13.906 }, 00:20:13.906 { 00:20:13.906 "name": "BaseBdev3", 00:20:13.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.906 "is_configured": false, 00:20:13.906 "data_offset": 0, 00:20:13.906 "data_size": 0 00:20:13.906 }, 00:20:13.906 { 00:20:13.906 "name": "BaseBdev4", 00:20:13.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.906 "is_configured": false, 00:20:13.906 "data_offset": 0, 00:20:13.906 "data_size": 0 00:20:13.906 } 00:20:13.906 ] 00:20:13.906 }' 00:20:13.906 10:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.906 10:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.474 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:14.733 [2024-07-26 10:30:27.490789] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:14.733 [2024-07-26 10:30:27.490822] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bebb70 name Existed_Raid, state configuring 00:20:14.733 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r 
raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:14.992 [2024-07-26 10:30:27.719419] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:14.992 [2024-07-26 10:30:27.719449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:14.992 [2024-07-26 10:30:27.719458] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:14.992 [2024-07-26 10:30:27.719469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:14.992 [2024-07-26 10:30:27.719477] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:14.992 [2024-07-26 10:30:27.719487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:14.992 [2024-07-26 10:30:27.719495] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:14.992 [2024-07-26 10:30:27.719505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:14.992 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:15.252 [2024-07-26 10:30:27.953402] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.252 BaseBdev1 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:15.252 10:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.511 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.511 [ 00:20:15.511 { 00:20:15.511 "name": "BaseBdev1", 00:20:15.511 "aliases": [ 00:20:15.511 "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed" 00:20:15.511 ], 00:20:15.511 "product_name": "Malloc disk", 00:20:15.511 "block_size": 512, 00:20:15.511 "num_blocks": 65536, 00:20:15.511 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:15.511 "assigned_rate_limits": { 00:20:15.511 "rw_ios_per_sec": 0, 00:20:15.511 "rw_mbytes_per_sec": 0, 00:20:15.511 "r_mbytes_per_sec": 0, 00:20:15.511 "w_mbytes_per_sec": 0 00:20:15.511 }, 00:20:15.511 "claimed": true, 00:20:15.511 "claim_type": "exclusive_write", 00:20:15.511 "zoned": false, 00:20:15.511 "supported_io_types": { 00:20:15.511 "read": true, 00:20:15.511 "write": true, 00:20:15.511 "unmap": true, 00:20:15.511 "flush": true, 00:20:15.511 "reset": true, 00:20:15.511 "nvme_admin": false, 00:20:15.511 "nvme_io": false, 00:20:15.511 "nvme_io_md": false, 00:20:15.511 "write_zeroes": true, 
00:20:15.511 "zcopy": true, 00:20:15.511 "get_zone_info": false, 00:20:15.511 "zone_management": false, 00:20:15.511 "zone_append": false, 00:20:15.511 "compare": false, 00:20:15.511 "compare_and_write": false, 00:20:15.511 "abort": true, 00:20:15.511 "seek_hole": false, 00:20:15.511 "seek_data": false, 00:20:15.511 "copy": true, 00:20:15.511 "nvme_iov_md": false 00:20:15.511 }, 00:20:15.511 "memory_domains": [ 00:20:15.511 { 00:20:15.511 "dma_device_id": "system", 00:20:15.511 "dma_device_type": 1 00:20:15.511 }, 00:20:15.511 { 00:20:15.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.511 "dma_device_type": 2 00:20:15.511 } 00:20:15.511 ], 00:20:15.511 "driver_specific": {} 00:20:15.511 } 00:20:15.511 ] 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.770 "name": "Existed_Raid", 00:20:15.770 "uuid": "0b1f1eeb-5007-415c-9284-f5a205f209b3", 00:20:15.770 "strip_size_kb": 64, 00:20:15.770 "state": "configuring", 00:20:15.770 "raid_level": "raid0", 00:20:15.770 "superblock": true, 00:20:15.770 "num_base_bdevs": 4, 00:20:15.770 "num_base_bdevs_discovered": 1, 00:20:15.770 "num_base_bdevs_operational": 4, 00:20:15.770 "base_bdevs_list": [ 00:20:15.770 { 00:20:15.770 "name": "BaseBdev1", 00:20:15.770 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:15.770 "is_configured": true, 00:20:15.770 "data_offset": 2048, 00:20:15.770 "data_size": 63488 00:20:15.770 }, 00:20:15.770 { 00:20:15.770 "name": "BaseBdev2", 00:20:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.770 "is_configured": false, 00:20:15.770 "data_offset": 0, 00:20:15.770 "data_size": 0 00:20:15.770 }, 00:20:15.770 { 00:20:15.770 "name": "BaseBdev3", 00:20:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.770 "is_configured": false, 00:20:15.770 "data_offset": 0, 00:20:15.770 "data_size": 0 00:20:15.770 }, 00:20:15.770 { 
00:20:15.770 "name": "BaseBdev4", 00:20:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.770 "is_configured": false, 00:20:15.770 "data_offset": 0, 00:20:15.770 "data_size": 0 00:20:15.770 } 00:20:15.770 ] 00:20:15.770 }' 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.770 10:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.708 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:16.708 [2024-07-26 10:30:29.469409] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.708 [2024-07-26 10:30:29.469450] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1beb4a0 name Existed_Raid, state configuring 00:20:16.708 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:16.967 [2024-07-26 10:30:29.698049] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.967 [2024-07-26 10:30:29.699451] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.967 [2024-07-26 10:30:29.699484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.967 [2024-07-26 10:30:29.699494] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:16.967 [2024-07-26 10:30:29.699505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:16.967 [2024-07-26 10:30:29.699513] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:16.967 [2024-07-26 10:30:29.699524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.967 10:30:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.967 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.226 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.226 "name": "Existed_Raid", 00:20:17.226 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:17.227 "strip_size_kb": 64, 00:20:17.227 "state": "configuring", 00:20:17.227 "raid_level": "raid0", 00:20:17.227 "superblock": true, 00:20:17.227 "num_base_bdevs": 4, 00:20:17.227 "num_base_bdevs_discovered": 1, 00:20:17.227 "num_base_bdevs_operational": 4, 00:20:17.227 "base_bdevs_list": [ 00:20:17.227 { 00:20:17.227 "name": "BaseBdev1", 00:20:17.227 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:17.227 "is_configured": true, 00:20:17.227 "data_offset": 2048, 00:20:17.227 "data_size": 63488 00:20:17.227 }, 00:20:17.227 { 00:20:17.227 "name": "BaseBdev2", 00:20:17.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.227 "is_configured": false, 00:20:17.227 "data_offset": 0, 00:20:17.227 "data_size": 0 00:20:17.227 }, 00:20:17.227 { 00:20:17.227 "name": "BaseBdev3", 00:20:17.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.227 "is_configured": false, 00:20:17.227 "data_offset": 0, 00:20:17.227 "data_size": 0 00:20:17.227 }, 00:20:17.227 { 00:20:17.227 "name": "BaseBdev4", 00:20:17.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.227 "is_configured": false, 00:20:17.227 "data_offset": 0, 00:20:17.227 "data_size": 0 00:20:17.227 } 00:20:17.227 ] 00:20:17.227 }' 00:20:17.227 10:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.227 10:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.828 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:18.085 [2024-07-26 10:30:30.755881] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.085 BaseBdev2 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:18.085 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:18.343 10:30:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:18.343 [ 00:20:18.343 { 00:20:18.343 "name": "BaseBdev2", 00:20:18.343 "aliases": [ 00:20:18.343 
"f6d3f3f1-43fa-430e-bfcb-618453300452" 00:20:18.343 ], 00:20:18.343 "product_name": "Malloc disk", 00:20:18.343 "block_size": 512, 00:20:18.343 "num_blocks": 65536, 00:20:18.343 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:18.343 "assigned_rate_limits": { 00:20:18.343 "rw_ios_per_sec": 0, 00:20:18.343 "rw_mbytes_per_sec": 0, 00:20:18.343 "r_mbytes_per_sec": 0, 00:20:18.343 "w_mbytes_per_sec": 0 00:20:18.343 }, 00:20:18.343 "claimed": true, 00:20:18.343 "claim_type": "exclusive_write", 00:20:18.343 "zoned": false, 00:20:18.343 "supported_io_types": { 00:20:18.343 "read": true, 00:20:18.343 "write": true, 00:20:18.343 "unmap": true, 00:20:18.343 "flush": true, 00:20:18.343 "reset": true, 00:20:18.343 "nvme_admin": false, 00:20:18.343 "nvme_io": false, 00:20:18.343 "nvme_io_md": false, 00:20:18.343 "write_zeroes": true, 00:20:18.343 "zcopy": true, 00:20:18.343 "get_zone_info": false, 00:20:18.343 "zone_management": false, 00:20:18.343 "zone_append": false, 00:20:18.343 "compare": false, 00:20:18.343 "compare_and_write": false, 00:20:18.343 "abort": true, 00:20:18.343 "seek_hole": false, 00:20:18.343 "seek_data": false, 00:20:18.343 "copy": true, 00:20:18.343 "nvme_iov_md": false 00:20:18.343 }, 00:20:18.343 "memory_domains": [ 00:20:18.343 { 00:20:18.343 "dma_device_id": "system", 00:20:18.343 "dma_device_type": 1 00:20:18.343 }, 00:20:18.343 { 00:20:18.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.343 "dma_device_type": 2 00:20:18.343 } 00:20:18.343 ], 00:20:18.343 "driver_specific": {} 00:20:18.343 } 00:20:18.343 ] 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.343 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.601 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.601 "name": "Existed_Raid", 
00:20:18.601 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:18.601 "strip_size_kb": 64, 00:20:18.601 "state": "configuring", 00:20:18.601 "raid_level": "raid0", 00:20:18.601 "superblock": true, 00:20:18.602 "num_base_bdevs": 4, 00:20:18.602 "num_base_bdevs_discovered": 2, 00:20:18.602 "num_base_bdevs_operational": 4, 00:20:18.602 "base_bdevs_list": [ 00:20:18.602 { 00:20:18.602 "name": "BaseBdev1", 00:20:18.602 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:18.602 "is_configured": true, 00:20:18.602 "data_offset": 2048, 00:20:18.602 "data_size": 63488 00:20:18.602 }, 00:20:18.602 { 00:20:18.602 "name": "BaseBdev2", 00:20:18.602 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:18.602 "is_configured": true, 00:20:18.602 "data_offset": 2048, 00:20:18.602 "data_size": 63488 00:20:18.602 }, 00:20:18.602 { 00:20:18.602 "name": "BaseBdev3", 00:20:18.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.602 "is_configured": false, 00:20:18.602 "data_offset": 0, 00:20:18.602 "data_size": 0 00:20:18.602 }, 00:20:18.602 { 00:20:18.602 "name": "BaseBdev4", 00:20:18.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.602 "is_configured": false, 00:20:18.602 "data_offset": 0, 00:20:18.602 "data_size": 0 00:20:18.602 } 00:20:18.602 ] 00:20:18.602 }' 00:20:18.602 10:30:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.602 10:30:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.169 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:19.428 [2024-07-26 10:30:32.238908] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:19.428 BaseBdev3 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:19.428 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:19.686 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:19.945 [ 00:20:19.945 { 00:20:19.945 "name": "BaseBdev3", 00:20:19.945 "aliases": [ 00:20:19.945 "3075dcbe-a954-4c95-97e7-92a3530243c3" 00:20:19.945 ], 00:20:19.945 "product_name": "Malloc disk", 00:20:19.945 "block_size": 512, 00:20:19.945 "num_blocks": 65536, 00:20:19.945 "uuid": "3075dcbe-a954-4c95-97e7-92a3530243c3", 00:20:19.945 "assigned_rate_limits": { 00:20:19.945 "rw_ios_per_sec": 0, 00:20:19.945 "rw_mbytes_per_sec": 0, 00:20:19.945 "r_mbytes_per_sec": 0, 00:20:19.945 "w_mbytes_per_sec": 0 00:20:19.945 }, 00:20:19.945 "claimed": true, 00:20:19.945 
"claim_type": "exclusive_write", 00:20:19.945 "zoned": false, 00:20:19.945 "supported_io_types": { 00:20:19.945 "read": true, 00:20:19.945 "write": true, 00:20:19.945 "unmap": true, 00:20:19.945 "flush": true, 00:20:19.945 "reset": true, 00:20:19.945 "nvme_admin": false, 00:20:19.945 "nvme_io": false, 00:20:19.945 "nvme_io_md": false, 00:20:19.945 "write_zeroes": true, 00:20:19.945 "zcopy": true, 00:20:19.945 "get_zone_info": false, 00:20:19.945 "zone_management": false, 00:20:19.945 "zone_append": false, 00:20:19.945 "compare": false, 00:20:19.945 "compare_and_write": false, 00:20:19.945 "abort": true, 00:20:19.945 "seek_hole": false, 00:20:19.945 "seek_data": false, 00:20:19.945 "copy": true, 00:20:19.945 "nvme_iov_md": false 00:20:19.945 }, 00:20:19.945 "memory_domains": [ 00:20:19.945 { 00:20:19.945 "dma_device_id": "system", 00:20:19.945 "dma_device_type": 1 00:20:19.945 }, 00:20:19.945 { 00:20:19.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.945 "dma_device_type": 2 00:20:19.945 } 00:20:19.945 ], 00:20:19.945 "driver_specific": {} 00:20:19.945 } 00:20:19.945 ] 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.945 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.204 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.204 "name": "Existed_Raid", 00:20:20.204 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:20.204 "strip_size_kb": 64, 00:20:20.204 "state": "configuring", 00:20:20.204 "raid_level": "raid0", 00:20:20.204 "superblock": true, 00:20:20.204 "num_base_bdevs": 4, 00:20:20.204 "num_base_bdevs_discovered": 3, 00:20:20.204 "num_base_bdevs_operational": 4, 00:20:20.204 "base_bdevs_list": [ 00:20:20.204 { 00:20:20.204 "name": "BaseBdev1", 00:20:20.204 "uuid": 
"c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:20.204 "is_configured": true, 00:20:20.204 "data_offset": 2048, 00:20:20.204 "data_size": 63488 00:20:20.204 }, 00:20:20.204 { 00:20:20.204 "name": "BaseBdev2", 00:20:20.204 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:20.204 "is_configured": true, 00:20:20.204 "data_offset": 2048, 00:20:20.204 "data_size": 63488 00:20:20.204 }, 00:20:20.204 { 00:20:20.204 "name": "BaseBdev3", 00:20:20.204 "uuid": "3075dcbe-a954-4c95-97e7-92a3530243c3", 00:20:20.204 "is_configured": true, 00:20:20.204 "data_offset": 2048, 00:20:20.204 "data_size": 63488 00:20:20.204 }, 00:20:20.204 { 00:20:20.204 "name": "BaseBdev4", 00:20:20.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.204 "is_configured": false, 00:20:20.204 "data_offset": 0, 00:20:20.204 "data_size": 0 00:20:20.204 } 00:20:20.204 ] 00:20:20.204 }' 00:20:20.204 10:30:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.204 10:30:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:21.029 [2024-07-26 10:30:33.702104] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:21.029 [2024-07-26 10:30:33.702274] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d9e3e0 00:20:21.029 [2024-07-26 10:30:33.702288] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:21.029 [2024-07-26 10:30:33.702450] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bd81d0 00:20:21.029 [2024-07-26 10:30:33.702558] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d9e3e0 00:20:21.029 [2024-07-26 10:30:33.702567] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d9e3e0 00:20:21.029 [2024-07-26 10:30:33.702653] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.029 BaseBdev4 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:21.029 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:21.287 10:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:21.287 [ 00:20:21.287 { 00:20:21.287 "name": "BaseBdev4", 00:20:21.287 "aliases": [ 00:20:21.287 "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9" 00:20:21.287 ], 00:20:21.287 "product_name": "Malloc disk", 00:20:21.287 "block_size": 512, 
00:20:21.287 "num_blocks": 65536, 00:20:21.287 "uuid": "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9", 00:20:21.287 "assigned_rate_limits": { 00:20:21.287 "rw_ios_per_sec": 0, 00:20:21.287 "rw_mbytes_per_sec": 0, 00:20:21.287 "r_mbytes_per_sec": 0, 00:20:21.287 "w_mbytes_per_sec": 0 00:20:21.287 }, 00:20:21.287 "claimed": true, 00:20:21.287 "claim_type": "exclusive_write", 00:20:21.287 "zoned": false, 00:20:21.287 "supported_io_types": { 00:20:21.287 "read": true, 00:20:21.287 "write": true, 00:20:21.287 "unmap": true, 00:20:21.287 "flush": true, 00:20:21.287 "reset": true, 00:20:21.287 "nvme_admin": false, 00:20:21.287 "nvme_io": false, 00:20:21.287 "nvme_io_md": false, 00:20:21.287 "write_zeroes": true, 00:20:21.287 "zcopy": true, 00:20:21.287 "get_zone_info": false, 00:20:21.287 "zone_management": false, 00:20:21.287 "zone_append": false, 00:20:21.287 "compare": false, 00:20:21.287 "compare_and_write": false, 00:20:21.287 "abort": true, 00:20:21.287 "seek_hole": false, 00:20:21.287 "seek_data": false, 00:20:21.287 "copy": true, 00:20:21.287 "nvme_iov_md": false 00:20:21.287 }, 00:20:21.287 "memory_domains": [ 00:20:21.287 { 00:20:21.287 "dma_device_id": "system", 00:20:21.287 "dma_device_type": 1 00:20:21.287 }, 00:20:21.287 { 00:20:21.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.287 "dma_device_type": 2 00:20:21.287 } 00:20:21.287 ], 00:20:21.287 "driver_specific": {} 00:20:21.287 } 00:20:21.287 ] 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.287 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.546 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.546 "name": "Existed_Raid", 00:20:21.546 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:21.546 "strip_size_kb": 64, 00:20:21.546 "state": "online", 00:20:21.546 
"raid_level": "raid0", 00:20:21.546 "superblock": true, 00:20:21.546 "num_base_bdevs": 4, 00:20:21.546 "num_base_bdevs_discovered": 4, 00:20:21.546 "num_base_bdevs_operational": 4, 00:20:21.546 "base_bdevs_list": [ 00:20:21.546 { 00:20:21.546 "name": "BaseBdev1", 00:20:21.546 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:21.546 "is_configured": true, 00:20:21.546 "data_offset": 2048, 00:20:21.546 "data_size": 63488 00:20:21.546 }, 00:20:21.546 { 00:20:21.546 "name": "BaseBdev2", 00:20:21.546 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:21.546 "is_configured": true, 00:20:21.546 "data_offset": 2048, 00:20:21.546 "data_size": 63488 00:20:21.546 }, 00:20:21.546 { 00:20:21.546 "name": "BaseBdev3", 00:20:21.546 "uuid": "3075dcbe-a954-4c95-97e7-92a3530243c3", 00:20:21.546 "is_configured": true, 00:20:21.546 "data_offset": 2048, 00:20:21.546 "data_size": 63488 00:20:21.546 }, 00:20:21.546 { 00:20:21.546 "name": "BaseBdev4", 00:20:21.546 "uuid": "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9", 00:20:21.546 "is_configured": true, 00:20:21.546 "data_offset": 2048, 00:20:21.546 "data_size": 63488 00:20:21.546 } 00:20:21.546 ] 00:20:21.546 }' 00:20:21.546 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.546 10:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:22.112 10:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:22.371 [2024-07-26 10:30:35.130190] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.371 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:22.371 "name": "Existed_Raid", 00:20:22.371 "aliases": [ 00:20:22.371 "5b715a00-e296-4e75-abc9-62e01598e797" 00:20:22.371 ], 00:20:22.371 "product_name": "Raid Volume", 00:20:22.371 "block_size": 512, 00:20:22.371 "num_blocks": 253952, 00:20:22.371 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:22.371 "assigned_rate_limits": { 00:20:22.371 "rw_ios_per_sec": 0, 00:20:22.371 "rw_mbytes_per_sec": 0, 00:20:22.371 "r_mbytes_per_sec": 0, 00:20:22.371 "w_mbytes_per_sec": 0 00:20:22.371 }, 00:20:22.371 "claimed": false, 00:20:22.371 "zoned": false, 00:20:22.371 "supported_io_types": { 00:20:22.371 "read": true, 00:20:22.371 "write": true, 00:20:22.371 "unmap": true, 00:20:22.371 "flush": true, 00:20:22.371 "reset": true, 00:20:22.371 "nvme_admin": false, 00:20:22.371 "nvme_io": false, 00:20:22.371 "nvme_io_md": false, 00:20:22.371 "write_zeroes": true, 00:20:22.371 "zcopy": false, 00:20:22.371 "get_zone_info": false, 00:20:22.371 
"zone_management": false, 00:20:22.371 "zone_append": false, 00:20:22.371 "compare": false, 00:20:22.371 "compare_and_write": false, 00:20:22.371 "abort": false, 00:20:22.371 "seek_hole": false, 00:20:22.371 "seek_data": false, 00:20:22.371 "copy": false, 00:20:22.371 "nvme_iov_md": false 00:20:22.371 }, 00:20:22.371 "memory_domains": [ 00:20:22.371 { 00:20:22.371 "dma_device_id": "system", 00:20:22.371 "dma_device_type": 1 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.371 "dma_device_type": 2 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "system", 00:20:22.371 "dma_device_type": 1 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.371 "dma_device_type": 2 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "system", 00:20:22.371 "dma_device_type": 1 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.371 "dma_device_type": 2 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "system", 00:20:22.371 "dma_device_type": 1 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.371 "dma_device_type": 2 00:20:22.371 } 00:20:22.371 ], 00:20:22.371 "driver_specific": { 00:20:22.371 "raid": { 00:20:22.371 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:22.371 "strip_size_kb": 64, 00:20:22.371 "state": "online", 00:20:22.371 "raid_level": "raid0", 00:20:22.371 "superblock": true, 00:20:22.371 "num_base_bdevs": 4, 00:20:22.371 "num_base_bdevs_discovered": 4, 00:20:22.371 "num_base_bdevs_operational": 4, 00:20:22.371 "base_bdevs_list": [ 00:20:22.371 { 00:20:22.371 "name": "BaseBdev1", 00:20:22.371 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:22.371 "is_configured": true, 00:20:22.371 "data_offset": 2048, 00:20:22.371 "data_size": 63488 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "name": "BaseBdev2", 00:20:22.371 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:22.371 "is_configured": true, 00:20:22.371 "data_offset": 2048, 00:20:22.371 "data_size": 63488 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "name": "BaseBdev3", 00:20:22.371 "uuid": "3075dcbe-a954-4c95-97e7-92a3530243c3", 00:20:22.371 "is_configured": true, 00:20:22.371 "data_offset": 2048, 00:20:22.371 "data_size": 63488 00:20:22.371 }, 00:20:22.371 { 00:20:22.371 "name": "BaseBdev4", 00:20:22.371 "uuid": "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9", 00:20:22.371 "is_configured": true, 00:20:22.371 "data_offset": 2048, 00:20:22.371 "data_size": 63488 00:20:22.371 } 00:20:22.371 ] 00:20:22.371 } 00:20:22.371 } 00:20:22.371 }' 00:20:22.371 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:22.371 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:22.371 BaseBdev2 00:20:22.371 BaseBdev3 00:20:22.371 BaseBdev4' 00:20:22.371 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:22.371 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:22.371 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:22.630 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:22.630 
"name": "BaseBdev1", 00:20:22.630 "aliases": [ 00:20:22.630 "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed" 00:20:22.630 ], 00:20:22.630 "product_name": "Malloc disk", 00:20:22.630 "block_size": 512, 00:20:22.630 "num_blocks": 65536, 00:20:22.630 "uuid": "c9bb8bf9-3f1a-46cd-aa96-d6c2ab8e50ed", 00:20:22.630 "assigned_rate_limits": { 00:20:22.630 "rw_ios_per_sec": 0, 00:20:22.630 "rw_mbytes_per_sec": 0, 00:20:22.630 "r_mbytes_per_sec": 0, 00:20:22.630 "w_mbytes_per_sec": 0 00:20:22.630 }, 00:20:22.630 "claimed": true, 00:20:22.630 "claim_type": "exclusive_write", 00:20:22.630 "zoned": false, 00:20:22.630 "supported_io_types": { 00:20:22.630 "read": true, 00:20:22.630 "write": true, 00:20:22.630 "unmap": true, 00:20:22.630 "flush": true, 00:20:22.630 "reset": true, 00:20:22.630 "nvme_admin": false, 00:20:22.630 "nvme_io": false, 00:20:22.630 "nvme_io_md": false, 00:20:22.630 "write_zeroes": true, 00:20:22.630 "zcopy": true, 00:20:22.630 "get_zone_info": false, 00:20:22.630 "zone_management": false, 00:20:22.630 "zone_append": false, 00:20:22.630 "compare": false, 00:20:22.630 "compare_and_write": false, 00:20:22.630 "abort": true, 00:20:22.630 "seek_hole": false, 00:20:22.630 "seek_data": false, 00:20:22.630 "copy": true, 00:20:22.630 "nvme_iov_md": false 00:20:22.630 }, 00:20:22.630 "memory_domains": [ 00:20:22.630 { 00:20:22.630 "dma_device_id": "system", 00:20:22.630 "dma_device_type": 1 00:20:22.630 }, 00:20:22.630 { 00:20:22.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.630 "dma_device_type": 2 00:20:22.630 } 00:20:22.630 ], 00:20:22.630 "driver_specific": {} 00:20:22.630 }' 00:20:22.630 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.630 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.630 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:22.630 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:22.887 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:23.146 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:23.146 "name": "BaseBdev2", 00:20:23.146 "aliases": [ 00:20:23.146 "f6d3f3f1-43fa-430e-bfcb-618453300452" 00:20:23.146 ], 00:20:23.146 
"product_name": "Malloc disk", 00:20:23.146 "block_size": 512, 00:20:23.146 "num_blocks": 65536, 00:20:23.146 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:23.146 "assigned_rate_limits": { 00:20:23.146 "rw_ios_per_sec": 0, 00:20:23.146 "rw_mbytes_per_sec": 0, 00:20:23.146 "r_mbytes_per_sec": 0, 00:20:23.146 "w_mbytes_per_sec": 0 00:20:23.146 }, 00:20:23.146 "claimed": true, 00:20:23.146 "claim_type": "exclusive_write", 00:20:23.146 "zoned": false, 00:20:23.146 "supported_io_types": { 00:20:23.146 "read": true, 00:20:23.146 "write": true, 00:20:23.146 "unmap": true, 00:20:23.146 "flush": true, 00:20:23.146 "reset": true, 00:20:23.146 "nvme_admin": false, 00:20:23.146 "nvme_io": false, 00:20:23.146 "nvme_io_md": false, 00:20:23.146 "write_zeroes": true, 00:20:23.146 "zcopy": true, 00:20:23.146 "get_zone_info": false, 00:20:23.146 "zone_management": false, 00:20:23.146 "zone_append": false, 00:20:23.146 "compare": false, 00:20:23.146 "compare_and_write": false, 00:20:23.146 "abort": true, 00:20:23.146 "seek_hole": false, 00:20:23.146 "seek_data": false, 00:20:23.146 "copy": true, 00:20:23.146 "nvme_iov_md": false 00:20:23.146 }, 00:20:23.146 "memory_domains": [ 00:20:23.146 { 00:20:23.146 "dma_device_id": "system", 00:20:23.146 "dma_device_type": 1 00:20:23.146 }, 00:20:23.146 { 00:20:23.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.146 "dma_device_type": 2 00:20:23.146 } 00:20:23.146 ], 00:20:23.146 "driver_specific": {} 00:20:23.146 }' 00:20:23.146 10:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.146 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:23.405 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:23.664 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:23.664 "name": "BaseBdev3", 00:20:23.664 "aliases": [ 00:20:23.664 "3075dcbe-a954-4c95-97e7-92a3530243c3" 00:20:23.664 ], 00:20:23.664 "product_name": "Malloc disk", 00:20:23.664 "block_size": 512, 00:20:23.664 "num_blocks": 65536, 00:20:23.664 "uuid": 
"3075dcbe-a954-4c95-97e7-92a3530243c3", 00:20:23.664 "assigned_rate_limits": { 00:20:23.664 "rw_ios_per_sec": 0, 00:20:23.664 "rw_mbytes_per_sec": 0, 00:20:23.664 "r_mbytes_per_sec": 0, 00:20:23.664 "w_mbytes_per_sec": 0 00:20:23.664 }, 00:20:23.664 "claimed": true, 00:20:23.664 "claim_type": "exclusive_write", 00:20:23.664 "zoned": false, 00:20:23.664 "supported_io_types": { 00:20:23.664 "read": true, 00:20:23.664 "write": true, 00:20:23.664 "unmap": true, 00:20:23.664 "flush": true, 00:20:23.664 "reset": true, 00:20:23.664 "nvme_admin": false, 00:20:23.664 "nvme_io": false, 00:20:23.664 "nvme_io_md": false, 00:20:23.664 "write_zeroes": true, 00:20:23.664 "zcopy": true, 00:20:23.664 "get_zone_info": false, 00:20:23.664 "zone_management": false, 00:20:23.664 "zone_append": false, 00:20:23.664 "compare": false, 00:20:23.664 "compare_and_write": false, 00:20:23.664 "abort": true, 00:20:23.664 "seek_hole": false, 00:20:23.664 "seek_data": false, 00:20:23.664 "copy": true, 00:20:23.664 "nvme_iov_md": false 00:20:23.664 }, 00:20:23.664 "memory_domains": [ 00:20:23.664 { 00:20:23.664 "dma_device_id": "system", 00:20:23.664 "dma_device_type": 1 00:20:23.664 }, 00:20:23.664 { 00:20:23.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.664 "dma_device_type": 2 00:20:23.664 } 00:20:23.664 ], 00:20:23.664 "driver_specific": {} 00:20:23.664 }' 00:20:23.664 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.664 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.923 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.183 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.183 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:24.183 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:24.183 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:24.183 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:24.183 "name": "BaseBdev4", 00:20:24.183 "aliases": [ 00:20:24.183 "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9" 00:20:24.183 ], 00:20:24.183 "product_name": "Malloc disk", 00:20:24.183 "block_size": 512, 00:20:24.183 "num_blocks": 65536, 00:20:24.183 "uuid": "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9", 00:20:24.183 "assigned_rate_limits": { 00:20:24.183 "rw_ios_per_sec": 0, 00:20:24.183 
"rw_mbytes_per_sec": 0, 00:20:24.183 "r_mbytes_per_sec": 0, 00:20:24.183 "w_mbytes_per_sec": 0 00:20:24.183 }, 00:20:24.183 "claimed": true, 00:20:24.183 "claim_type": "exclusive_write", 00:20:24.183 "zoned": false, 00:20:24.183 "supported_io_types": { 00:20:24.183 "read": true, 00:20:24.183 "write": true, 00:20:24.183 "unmap": true, 00:20:24.183 "flush": true, 00:20:24.183 "reset": true, 00:20:24.183 "nvme_admin": false, 00:20:24.183 "nvme_io": false, 00:20:24.183 "nvme_io_md": false, 00:20:24.183 "write_zeroes": true, 00:20:24.183 "zcopy": true, 00:20:24.183 "get_zone_info": false, 00:20:24.183 "zone_management": false, 00:20:24.183 "zone_append": false, 00:20:24.183 "compare": false, 00:20:24.183 "compare_and_write": false, 00:20:24.183 "abort": true, 00:20:24.183 "seek_hole": false, 00:20:24.183 "seek_data": false, 00:20:24.183 "copy": true, 00:20:24.183 "nvme_iov_md": false 00:20:24.183 }, 00:20:24.183 "memory_domains": [ 00:20:24.183 { 00:20:24.183 "dma_device_id": "system", 00:20:24.183 "dma_device_type": 1 00:20:24.183 }, 00:20:24.183 { 00:20:24.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.183 "dma_device_type": 2 00:20:24.183 } 00:20:24.183 ], 00:20:24.183 "driver_specific": {} 00:20:24.183 }' 00:20:24.183 10:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.183 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:24.183 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:24.183 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:24.442 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:24.701 [2024-07-26 10:30:37.516209] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.701 [2024-07-26 10:30:37.516235] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.701 [2024-07-26 10:30:37.516279] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.701 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.702 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.702 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.702 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.702 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.961 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.961 "name": "Existed_Raid", 00:20:24.961 "uuid": "5b715a00-e296-4e75-abc9-62e01598e797", 00:20:24.961 "strip_size_kb": 64, 00:20:24.961 "state": "offline", 00:20:24.961 "raid_level": "raid0", 00:20:24.961 "superblock": true, 00:20:24.961 "num_base_bdevs": 4, 00:20:24.961 "num_base_bdevs_discovered": 3, 00:20:24.961 "num_base_bdevs_operational": 3, 00:20:24.961 "base_bdevs_list": [ 00:20:24.961 { 00:20:24.961 "name": null, 00:20:24.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.961 "is_configured": false, 00:20:24.961 "data_offset": 2048, 00:20:24.961 "data_size": 63488 00:20:24.961 }, 00:20:24.961 { 00:20:24.961 "name": "BaseBdev2", 00:20:24.961 "uuid": "f6d3f3f1-43fa-430e-bfcb-618453300452", 00:20:24.961 "is_configured": true, 00:20:24.961 "data_offset": 2048, 00:20:24.961 "data_size": 63488 00:20:24.961 }, 00:20:24.961 { 00:20:24.961 "name": "BaseBdev3", 00:20:24.961 "uuid": "3075dcbe-a954-4c95-97e7-92a3530243c3", 00:20:24.961 "is_configured": true, 00:20:24.961 "data_offset": 2048, 00:20:24.961 "data_size": 63488 00:20:24.961 }, 00:20:24.961 { 00:20:24.961 "name": "BaseBdev4", 00:20:24.961 "uuid": "394e8c6c-dcc4-4fde-8fd7-e521220ec3b9", 00:20:24.961 "is_configured": true, 00:20:24.961 "data_offset": 2048, 00:20:24.961 "data_size": 63488 00:20:24.961 } 00:20:24.961 ] 00:20:24.961 }' 00:20:24.961 10:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.961 10:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.530 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:25.530 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:25.530 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.530 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:26.098 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:26.098 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.098 10:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:26.358 [2024-07-26 10:30:39.013214] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.358 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:26.616 [2024-07-26 10:30:39.392123] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:26.616 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:26.616 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:26.616 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.616 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:26.875 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:26.875 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.875 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:27.134 [2024-07-26 10:30:39.843380] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:27.134 [2024-07-26 10:30:39.843426] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d9e3e0 name Existed_Raid, state offline 00:20:27.134 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:27.134 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:27.134 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:27.134 10:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.393 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:27.393 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:27.393 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:27.393 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:27.393 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:27.393 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:27.651 BaseBdev2 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:27.651 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.910 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.910 [ 00:20:27.910 { 00:20:27.910 "name": "BaseBdev2", 00:20:27.910 "aliases": [ 00:20:27.910 "37597e36-1373-477e-9cf4-827048afe63e" 00:20:27.910 ], 00:20:27.910 "product_name": "Malloc disk", 00:20:27.910 "block_size": 512, 00:20:27.910 "num_blocks": 65536, 00:20:27.910 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:27.910 "assigned_rate_limits": { 00:20:27.910 "rw_ios_per_sec": 0, 00:20:27.910 "rw_mbytes_per_sec": 0, 00:20:27.910 "r_mbytes_per_sec": 0, 00:20:27.910 "w_mbytes_per_sec": 0 00:20:27.910 }, 00:20:27.910 "claimed": false, 00:20:27.910 "zoned": false, 00:20:27.910 "supported_io_types": { 00:20:27.910 "read": true, 00:20:27.910 "write": true, 00:20:27.910 "unmap": true, 00:20:27.910 "flush": true, 00:20:27.910 "reset": true, 00:20:27.910 "nvme_admin": false, 00:20:27.910 "nvme_io": false, 00:20:27.910 "nvme_io_md": false, 00:20:27.910 "write_zeroes": true, 00:20:27.910 "zcopy": true, 00:20:27.910 "get_zone_info": false, 00:20:27.910 "zone_management": false, 00:20:27.910 "zone_append": false, 00:20:27.910 "compare": false, 00:20:27.910 "compare_and_write": false, 00:20:27.910 "abort": true, 00:20:27.910 "seek_hole": false, 00:20:27.910 "seek_data": false, 00:20:27.910 "copy": true, 00:20:27.910 "nvme_iov_md": false 00:20:27.910 }, 00:20:27.910 "memory_domains": [ 00:20:27.910 { 00:20:27.910 "dma_device_id": "system", 00:20:27.910 "dma_device_type": 1 00:20:27.910 }, 00:20:27.910 { 00:20:27.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.910 "dma_device_type": 2 00:20:27.910 } 00:20:27.910 ], 
00:20:27.910 "driver_specific": {} 00:20:27.910 } 00:20:27.910 ] 00:20:27.910 10:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:27.910 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:27.910 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:27.910 10:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:28.169 BaseBdev3 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:28.169 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.428 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:28.687 [ 00:20:28.687 { 00:20:28.687 "name": "BaseBdev3", 00:20:28.687 "aliases": [ 00:20:28.687 "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463" 00:20:28.687 ], 00:20:28.687 "product_name": "Malloc disk", 00:20:28.687 "block_size": 512, 00:20:28.687 "num_blocks": 65536, 00:20:28.687 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:28.687 "assigned_rate_limits": { 00:20:28.687 "rw_ios_per_sec": 0, 00:20:28.687 "rw_mbytes_per_sec": 0, 00:20:28.687 "r_mbytes_per_sec": 0, 00:20:28.687 "w_mbytes_per_sec": 0 00:20:28.687 }, 00:20:28.687 "claimed": false, 00:20:28.687 "zoned": false, 00:20:28.687 "supported_io_types": { 00:20:28.687 "read": true, 00:20:28.687 "write": true, 00:20:28.687 "unmap": true, 00:20:28.687 "flush": true, 00:20:28.687 "reset": true, 00:20:28.687 "nvme_admin": false, 00:20:28.687 "nvme_io": false, 00:20:28.687 "nvme_io_md": false, 00:20:28.687 "write_zeroes": true, 00:20:28.687 "zcopy": true, 00:20:28.687 "get_zone_info": false, 00:20:28.687 "zone_management": false, 00:20:28.687 "zone_append": false, 00:20:28.687 "compare": false, 00:20:28.687 "compare_and_write": false, 00:20:28.687 "abort": true, 00:20:28.687 "seek_hole": false, 00:20:28.687 "seek_data": false, 00:20:28.687 "copy": true, 00:20:28.687 "nvme_iov_md": false 00:20:28.687 }, 00:20:28.687 "memory_domains": [ 00:20:28.687 { 00:20:28.687 "dma_device_id": "system", 00:20:28.687 "dma_device_type": 1 00:20:28.687 }, 00:20:28.687 { 00:20:28.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.687 "dma_device_type": 2 00:20:28.687 } 00:20:28.687 ], 00:20:28.687 "driver_specific": {} 00:20:28.687 } 00:20:28.687 ] 00:20:28.687 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:28.687 10:30:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:28.687 10:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:28.687 10:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:28.946 BaseBdev4 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:28.946 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.204 10:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:29.465 [ 00:20:29.465 { 00:20:29.465 "name": "BaseBdev4", 00:20:29.465 "aliases": [ 00:20:29.465 "5b8f6fd1-8efa-423b-8d6a-8cd98a823157" 00:20:29.465 ], 00:20:29.465 "product_name": "Malloc disk", 00:20:29.465 "block_size": 512, 00:20:29.465 "num_blocks": 65536, 00:20:29.465 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:29.465 "assigned_rate_limits": { 00:20:29.465 "rw_ios_per_sec": 0, 00:20:29.465 "rw_mbytes_per_sec": 0, 00:20:29.465 "r_mbytes_per_sec": 0, 00:20:29.465 "w_mbytes_per_sec": 0 00:20:29.465 }, 00:20:29.465 "claimed": false, 00:20:29.465 "zoned": false, 00:20:29.465 "supported_io_types": { 00:20:29.465 "read": true, 00:20:29.465 "write": true, 00:20:29.465 "unmap": true, 00:20:29.465 "flush": true, 00:20:29.465 "reset": true, 00:20:29.465 "nvme_admin": false, 00:20:29.465 "nvme_io": false, 00:20:29.465 "nvme_io_md": false, 00:20:29.465 "write_zeroes": true, 00:20:29.465 "zcopy": true, 00:20:29.465 "get_zone_info": false, 00:20:29.465 "zone_management": false, 00:20:29.465 "zone_append": false, 00:20:29.465 "compare": false, 00:20:29.465 "compare_and_write": false, 00:20:29.465 "abort": true, 00:20:29.465 "seek_hole": false, 00:20:29.465 "seek_data": false, 00:20:29.465 "copy": true, 00:20:29.465 "nvme_iov_md": false 00:20:29.465 }, 00:20:29.465 "memory_domains": [ 00:20:29.465 { 00:20:29.465 "dma_device_id": "system", 00:20:29.465 "dma_device_type": 1 00:20:29.465 }, 00:20:29.465 { 00:20:29.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.465 "dma_device_type": 2 00:20:29.465 } 00:20:29.465 ], 00:20:29.465 "driver_specific": {} 00:20:29.465 } 00:20:29.465 ] 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:29.465 [2024-07-26 10:30:42.325401] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.465 [2024-07-26 10:30:42.325440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.465 [2024-07-26 10:30:42.325459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.465 [2024-07-26 10:30:42.326656] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.465 [2024-07-26 10:30:42.326695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.465 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.756 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.756 "name": "Existed_Raid", 00:20:29.756 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:29.756 "strip_size_kb": 64, 00:20:29.756 "state": "configuring", 00:20:29.756 "raid_level": "raid0", 00:20:29.756 "superblock": true, 00:20:29.756 "num_base_bdevs": 4, 00:20:29.756 "num_base_bdevs_discovered": 3, 00:20:29.756 "num_base_bdevs_operational": 4, 00:20:29.756 "base_bdevs_list": [ 00:20:29.756 { 00:20:29.756 "name": "BaseBdev1", 00:20:29.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.756 "is_configured": false, 00:20:29.756 "data_offset": 0, 00:20:29.756 "data_size": 0 00:20:29.756 }, 00:20:29.756 { 00:20:29.756 "name": "BaseBdev2", 00:20:29.756 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:29.756 "is_configured": true, 00:20:29.756 "data_offset": 2048, 00:20:29.756 "data_size": 63488 00:20:29.756 }, 00:20:29.756 { 00:20:29.756 "name": "BaseBdev3", 00:20:29.756 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:29.756 "is_configured": true, 00:20:29.756 "data_offset": 2048, 00:20:29.756 "data_size": 63488 00:20:29.756 }, 00:20:29.756 { 
00:20:29.756 "name": "BaseBdev4", 00:20:29.756 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:29.757 "is_configured": true, 00:20:29.757 "data_offset": 2048, 00:20:29.757 "data_size": 63488 00:20:29.757 } 00:20:29.757 ] 00:20:29.757 }' 00:20:29.757 10:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.757 10:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.323 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:30.581 [2024-07-26 10:30:43.247812] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:30.581 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:30.581 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.582 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.840 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.840 "name": "Existed_Raid", 00:20:30.840 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:30.840 "strip_size_kb": 64, 00:20:30.840 "state": "configuring", 00:20:30.840 "raid_level": "raid0", 00:20:30.840 "superblock": true, 00:20:30.840 "num_base_bdevs": 4, 00:20:30.840 "num_base_bdevs_discovered": 2, 00:20:30.840 "num_base_bdevs_operational": 4, 00:20:30.840 "base_bdevs_list": [ 00:20:30.840 { 00:20:30.840 "name": "BaseBdev1", 00:20:30.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.840 "is_configured": false, 00:20:30.840 "data_offset": 0, 00:20:30.840 "data_size": 0 00:20:30.840 }, 00:20:30.840 { 00:20:30.840 "name": null, 00:20:30.840 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:30.840 "is_configured": false, 00:20:30.840 "data_offset": 2048, 00:20:30.840 "data_size": 63488 00:20:30.840 }, 00:20:30.840 { 00:20:30.840 "name": "BaseBdev3", 00:20:30.840 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:30.840 "is_configured": true, 00:20:30.840 "data_offset": 2048, 00:20:30.840 "data_size": 63488 00:20:30.840 }, 00:20:30.840 { 00:20:30.840 "name": "BaseBdev4", 00:20:30.840 "uuid": 
"5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:30.840 "is_configured": true, 00:20:30.840 "data_offset": 2048, 00:20:30.840 "data_size": 63488 00:20:30.840 } 00:20:30.840 ] 00:20:30.840 }' 00:20:30.840 10:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.840 10:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.406 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:31.406 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:31.406 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:31.664 [2024-07-26 10:30:44.506367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.664 BaseBdev1 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:31.664 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.924 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:32.183 [ 00:20:32.183 { 00:20:32.183 "name": "BaseBdev1", 00:20:32.183 "aliases": [ 00:20:32.183 "4516c302-8879-4aa4-9135-1e71a4b18a11" 00:20:32.183 ], 00:20:32.183 "product_name": "Malloc disk", 00:20:32.183 "block_size": 512, 00:20:32.183 "num_blocks": 65536, 00:20:32.183 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:32.183 "assigned_rate_limits": { 00:20:32.183 "rw_ios_per_sec": 0, 00:20:32.183 "rw_mbytes_per_sec": 0, 00:20:32.183 "r_mbytes_per_sec": 0, 00:20:32.183 "w_mbytes_per_sec": 0 00:20:32.183 }, 00:20:32.183 "claimed": true, 00:20:32.183 "claim_type": "exclusive_write", 00:20:32.183 "zoned": false, 00:20:32.183 "supported_io_types": { 00:20:32.183 "read": true, 00:20:32.183 "write": true, 00:20:32.183 "unmap": true, 00:20:32.183 "flush": true, 00:20:32.183 "reset": true, 00:20:32.183 "nvme_admin": false, 00:20:32.183 "nvme_io": false, 00:20:32.183 "nvme_io_md": false, 00:20:32.183 "write_zeroes": true, 00:20:32.183 "zcopy": true, 00:20:32.183 "get_zone_info": false, 00:20:32.183 "zone_management": false, 00:20:32.183 "zone_append": false, 00:20:32.183 "compare": false, 00:20:32.183 "compare_and_write": false, 00:20:32.183 "abort": true, 00:20:32.183 "seek_hole": false, 
00:20:32.183 "seek_data": false, 00:20:32.183 "copy": true, 00:20:32.183 "nvme_iov_md": false 00:20:32.183 }, 00:20:32.183 "memory_domains": [ 00:20:32.183 { 00:20:32.183 "dma_device_id": "system", 00:20:32.184 "dma_device_type": 1 00:20:32.184 }, 00:20:32.184 { 00:20:32.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.184 "dma_device_type": 2 00:20:32.184 } 00:20:32.184 ], 00:20:32.184 "driver_specific": {} 00:20:32.184 } 00:20:32.184 ] 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.184 10:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.443 10:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.443 "name": "Existed_Raid", 00:20:32.443 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:32.443 "strip_size_kb": 64, 00:20:32.443 "state": "configuring", 00:20:32.443 "raid_level": "raid0", 00:20:32.443 "superblock": true, 00:20:32.443 "num_base_bdevs": 4, 00:20:32.443 "num_base_bdevs_discovered": 3, 00:20:32.443 "num_base_bdevs_operational": 4, 00:20:32.443 "base_bdevs_list": [ 00:20:32.443 { 00:20:32.443 "name": "BaseBdev1", 00:20:32.443 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:32.443 "is_configured": true, 00:20:32.443 "data_offset": 2048, 00:20:32.443 "data_size": 63488 00:20:32.443 }, 00:20:32.443 { 00:20:32.443 "name": null, 00:20:32.443 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:32.443 "is_configured": false, 00:20:32.443 "data_offset": 2048, 00:20:32.443 "data_size": 63488 00:20:32.443 }, 00:20:32.443 { 00:20:32.443 "name": "BaseBdev3", 00:20:32.443 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:32.443 "is_configured": true, 00:20:32.443 "data_offset": 2048, 00:20:32.443 "data_size": 63488 00:20:32.443 }, 00:20:32.443 { 00:20:32.443 "name": "BaseBdev4", 00:20:32.443 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:32.443 "is_configured": true, 00:20:32.443 "data_offset": 2048, 00:20:32.443 "data_size": 63488 00:20:32.443 } 00:20:32.443 ] 00:20:32.443 }' 00:20:32.443 10:30:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.443 10:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.010 10:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.010 10:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:33.268 10:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:33.268 10:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:33.527 [2024-07-26 10:30:46.186833] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.527 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.786 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.786 "name": "Existed_Raid", 00:20:33.786 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:33.786 "strip_size_kb": 64, 00:20:33.786 "state": "configuring", 00:20:33.786 "raid_level": "raid0", 00:20:33.786 "superblock": true, 00:20:33.786 "num_base_bdevs": 4, 00:20:33.786 "num_base_bdevs_discovered": 2, 00:20:33.786 "num_base_bdevs_operational": 4, 00:20:33.786 "base_bdevs_list": [ 00:20:33.786 { 00:20:33.786 "name": "BaseBdev1", 00:20:33.786 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:33.786 "is_configured": true, 00:20:33.786 "data_offset": 2048, 00:20:33.786 "data_size": 63488 00:20:33.786 }, 00:20:33.786 { 00:20:33.786 "name": null, 00:20:33.786 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:33.786 "is_configured": false, 00:20:33.786 "data_offset": 2048, 00:20:33.786 "data_size": 63488 00:20:33.786 }, 00:20:33.786 { 00:20:33.786 "name": null, 00:20:33.786 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 
00:20:33.786 "is_configured": false, 00:20:33.786 "data_offset": 2048, 00:20:33.786 "data_size": 63488 00:20:33.786 }, 00:20:33.786 { 00:20:33.786 "name": "BaseBdev4", 00:20:33.786 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:33.786 "is_configured": true, 00:20:33.786 "data_offset": 2048, 00:20:33.786 "data_size": 63488 00:20:33.786 } 00:20:33.786 ] 00:20:33.786 }' 00:20:33.786 10:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.786 10:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.354 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.354 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:34.354 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:34.354 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:34.613 [2024-07-26 10:30:47.450200] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.613 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.614 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.614 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.873 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.873 "name": "Existed_Raid", 00:20:34.873 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:34.873 "strip_size_kb": 64, 00:20:34.873 "state": "configuring", 00:20:34.873 "raid_level": "raid0", 00:20:34.873 "superblock": true, 00:20:34.873 "num_base_bdevs": 4, 00:20:34.873 "num_base_bdevs_discovered": 3, 00:20:34.873 "num_base_bdevs_operational": 4, 00:20:34.873 "base_bdevs_list": [ 00:20:34.873 { 00:20:34.873 "name": "BaseBdev1", 00:20:34.873 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:34.873 
"is_configured": true, 00:20:34.873 "data_offset": 2048, 00:20:34.873 "data_size": 63488 00:20:34.873 }, 00:20:34.873 { 00:20:34.873 "name": null, 00:20:34.873 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:34.873 "is_configured": false, 00:20:34.873 "data_offset": 2048, 00:20:34.873 "data_size": 63488 00:20:34.873 }, 00:20:34.873 { 00:20:34.873 "name": "BaseBdev3", 00:20:34.873 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:34.873 "is_configured": true, 00:20:34.873 "data_offset": 2048, 00:20:34.873 "data_size": 63488 00:20:34.873 }, 00:20:34.873 { 00:20:34.873 "name": "BaseBdev4", 00:20:34.873 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:34.873 "is_configured": true, 00:20:34.873 "data_offset": 2048, 00:20:34.873 "data_size": 63488 00:20:34.873 } 00:20:34.873 ] 00:20:34.873 }' 00:20:34.873 10:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.873 10:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.442 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.442 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:35.701 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:35.701 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:35.961 [2024-07-26 10:30:48.713533] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.961 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.220 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.220 "name": "Existed_Raid", 00:20:36.220 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:36.220 "strip_size_kb": 
64, 00:20:36.220 "state": "configuring", 00:20:36.220 "raid_level": "raid0", 00:20:36.220 "superblock": true, 00:20:36.220 "num_base_bdevs": 4, 00:20:36.220 "num_base_bdevs_discovered": 2, 00:20:36.220 "num_base_bdevs_operational": 4, 00:20:36.220 "base_bdevs_list": [ 00:20:36.220 { 00:20:36.220 "name": null, 00:20:36.220 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:36.220 "is_configured": false, 00:20:36.220 "data_offset": 2048, 00:20:36.220 "data_size": 63488 00:20:36.220 }, 00:20:36.220 { 00:20:36.220 "name": null, 00:20:36.220 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:36.220 "is_configured": false, 00:20:36.220 "data_offset": 2048, 00:20:36.220 "data_size": 63488 00:20:36.220 }, 00:20:36.220 { 00:20:36.220 "name": "BaseBdev3", 00:20:36.220 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:36.220 "is_configured": true, 00:20:36.220 "data_offset": 2048, 00:20:36.220 "data_size": 63488 00:20:36.220 }, 00:20:36.220 { 00:20:36.220 "name": "BaseBdev4", 00:20:36.220 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:36.220 "is_configured": true, 00:20:36.220 "data_offset": 2048, 00:20:36.220 "data_size": 63488 00:20:36.220 } 00:20:36.220 ] 00:20:36.220 }' 00:20:36.220 10:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.220 10:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.789 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.789 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:37.047 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:37.047 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:37.306 [2024-07-26 10:30:49.967068] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.306 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:37.306 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.306 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:37.306 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:37.306 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:37.306 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:37.307 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.307 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.307 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.307 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.307 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.307 10:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.566 10:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.566 "name": "Existed_Raid", 00:20:37.566 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:37.566 "strip_size_kb": 64, 00:20:37.566 "state": "configuring", 00:20:37.566 "raid_level": "raid0", 00:20:37.566 "superblock": true, 00:20:37.566 "num_base_bdevs": 4, 00:20:37.566 "num_base_bdevs_discovered": 3, 00:20:37.566 "num_base_bdevs_operational": 4, 00:20:37.566 "base_bdevs_list": [ 00:20:37.566 { 00:20:37.566 "name": null, 00:20:37.566 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:37.566 "is_configured": false, 00:20:37.566 "data_offset": 2048, 00:20:37.566 "data_size": 63488 00:20:37.566 }, 00:20:37.566 { 00:20:37.566 "name": "BaseBdev2", 00:20:37.566 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:37.566 "is_configured": true, 00:20:37.566 "data_offset": 2048, 00:20:37.566 "data_size": 63488 00:20:37.566 }, 00:20:37.566 { 00:20:37.566 "name": "BaseBdev3", 00:20:37.566 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:37.566 "is_configured": true, 00:20:37.566 "data_offset": 2048, 00:20:37.566 "data_size": 63488 00:20:37.566 }, 00:20:37.566 { 00:20:37.566 "name": "BaseBdev4", 00:20:37.566 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:37.566 "is_configured": true, 00:20:37.566 "data_offset": 2048, 00:20:37.566 "data_size": 63488 00:20:37.566 } 00:20:37.566 ] 00:20:37.566 }' 00:20:37.566 10:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.566 10:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.135 10:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.135 10:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:38.135 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:38.135 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.135 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:38.395 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4516c302-8879-4aa4-9135-1e71a4b18a11 00:20:38.654 [2024-07-26 10:30:51.470088] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:38.654 [2024-07-26 10:30:51.470245] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1be2cb0 00:20:38.654 [2024-07-26 10:30:51.470258] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:38.654 [2024-07-26 10:30:51.470422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bd81d0 00:20:38.654 [2024-07-26 10:30:51.470529] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1be2cb0 00:20:38.654 [2024-07-26 10:30:51.470538] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1be2cb0 00:20:38.654 [2024-07-26 10:30:51.470620] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.654 NewBaseBdev 00:20:38.654 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:38.654 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:38.654 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:38.655 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:38.655 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:38.655 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:38.655 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.914 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:39.173 [ 00:20:39.173 { 00:20:39.173 "name": "NewBaseBdev", 00:20:39.173 "aliases": [ 00:20:39.173 "4516c302-8879-4aa4-9135-1e71a4b18a11" 00:20:39.173 ], 00:20:39.173 "product_name": "Malloc disk", 00:20:39.173 "block_size": 512, 00:20:39.173 "num_blocks": 65536, 00:20:39.173 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:39.173 "assigned_rate_limits": { 00:20:39.173 "rw_ios_per_sec": 0, 00:20:39.173 "rw_mbytes_per_sec": 0, 00:20:39.173 "r_mbytes_per_sec": 0, 00:20:39.173 "w_mbytes_per_sec": 0 00:20:39.173 }, 00:20:39.173 "claimed": true, 00:20:39.173 "claim_type": "exclusive_write", 00:20:39.173 "zoned": false, 00:20:39.173 "supported_io_types": { 00:20:39.173 "read": true, 00:20:39.173 "write": true, 00:20:39.173 "unmap": true, 00:20:39.173 "flush": true, 00:20:39.173 "reset": true, 00:20:39.173 "nvme_admin": false, 00:20:39.173 "nvme_io": false, 00:20:39.173 "nvme_io_md": false, 00:20:39.173 "write_zeroes": true, 00:20:39.173 "zcopy": true, 00:20:39.173 "get_zone_info": false, 00:20:39.173 "zone_management": false, 00:20:39.173 "zone_append": false, 00:20:39.173 "compare": false, 00:20:39.173 "compare_and_write": false, 00:20:39.173 "abort": true, 00:20:39.173 "seek_hole": false, 00:20:39.173 "seek_data": false, 00:20:39.173 "copy": true, 00:20:39.173 "nvme_iov_md": false 00:20:39.173 }, 00:20:39.173 "memory_domains": [ 00:20:39.173 { 00:20:39.173 "dma_device_id": "system", 00:20:39.173 "dma_device_type": 1 00:20:39.173 }, 00:20:39.173 { 00:20:39.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.173 "dma_device_type": 2 00:20:39.173 } 00:20:39.173 ], 00:20:39.173 "driver_specific": {} 00:20:39.173 } 00:20:39.173 ] 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.173 
10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.173 10:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.433 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.433 "name": "Existed_Raid", 00:20:39.433 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:39.433 "strip_size_kb": 64, 00:20:39.433 "state": "online", 00:20:39.433 "raid_level": "raid0", 00:20:39.433 "superblock": true, 00:20:39.433 "num_base_bdevs": 4, 00:20:39.433 "num_base_bdevs_discovered": 4, 00:20:39.433 "num_base_bdevs_operational": 4, 00:20:39.433 "base_bdevs_list": [ 00:20:39.433 { 00:20:39.433 "name": "NewBaseBdev", 00:20:39.433 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:39.433 "is_configured": true, 00:20:39.433 "data_offset": 2048, 00:20:39.433 "data_size": 63488 00:20:39.433 }, 00:20:39.433 { 00:20:39.433 "name": "BaseBdev2", 00:20:39.433 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:39.433 "is_configured": true, 00:20:39.433 "data_offset": 2048, 00:20:39.433 "data_size": 63488 00:20:39.433 }, 00:20:39.433 { 00:20:39.433 "name": "BaseBdev3", 00:20:39.433 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:39.433 "is_configured": true, 00:20:39.433 "data_offset": 2048, 00:20:39.433 "data_size": 63488 00:20:39.433 }, 00:20:39.433 { 00:20:39.433 "name": "BaseBdev4", 00:20:39.433 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:39.433 "is_configured": true, 00:20:39.433 "data_offset": 2048, 00:20:39.433 "data_size": 63488 00:20:39.433 } 00:20:39.433 ] 00:20:39.433 }' 00:20:39.433 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.433 10:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:40.002 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:40.261 [2024-07-26 10:30:52.930251] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.261 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:40.261 "name": "Existed_Raid", 00:20:40.261 "aliases": [ 00:20:40.261 "4f3163a8-4513-4be0-bbd1-e1a3e354654d" 00:20:40.261 ], 00:20:40.261 "product_name": "Raid Volume", 00:20:40.261 "block_size": 512, 00:20:40.261 "num_blocks": 253952, 00:20:40.261 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:40.261 "assigned_rate_limits": { 00:20:40.261 "rw_ios_per_sec": 0, 00:20:40.261 "rw_mbytes_per_sec": 0, 00:20:40.261 "r_mbytes_per_sec": 0, 00:20:40.261 "w_mbytes_per_sec": 0 00:20:40.261 }, 00:20:40.261 "claimed": false, 00:20:40.261 "zoned": false, 00:20:40.261 "supported_io_types": { 00:20:40.261 "read": true, 00:20:40.261 "write": true, 00:20:40.261 "unmap": true, 00:20:40.261 "flush": true, 00:20:40.261 "reset": true, 00:20:40.261 "nvme_admin": false, 00:20:40.261 "nvme_io": false, 00:20:40.261 "nvme_io_md": false, 00:20:40.261 "write_zeroes": true, 00:20:40.261 "zcopy": false, 00:20:40.261 "get_zone_info": false, 00:20:40.261 "zone_management": false, 00:20:40.261 "zone_append": false, 00:20:40.261 "compare": false, 00:20:40.261 "compare_and_write": false, 00:20:40.261 "abort": false, 00:20:40.261 "seek_hole": false, 00:20:40.261 "seek_data": false, 00:20:40.261 "copy": false, 00:20:40.261 "nvme_iov_md": false 00:20:40.261 }, 00:20:40.261 "memory_domains": [ 00:20:40.261 { 00:20:40.261 "dma_device_id": "system", 00:20:40.261 "dma_device_type": 1 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.261 "dma_device_type": 2 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "system", 00:20:40.261 "dma_device_type": 1 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.261 "dma_device_type": 2 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "system", 00:20:40.261 "dma_device_type": 1 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.261 "dma_device_type": 2 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "system", 00:20:40.261 "dma_device_type": 1 00:20:40.261 }, 00:20:40.261 { 00:20:40.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.261 "dma_device_type": 2 00:20:40.261 } 00:20:40.261 ], 00:20:40.261 "driver_specific": { 00:20:40.261 "raid": { 00:20:40.261 "uuid": "4f3163a8-4513-4be0-bbd1-e1a3e354654d", 00:20:40.261 "strip_size_kb": 64, 00:20:40.261 "state": "online", 00:20:40.261 "raid_level": "raid0", 00:20:40.261 "superblock": true, 00:20:40.261 "num_base_bdevs": 4, 00:20:40.261 "num_base_bdevs_discovered": 4, 00:20:40.261 "num_base_bdevs_operational": 4, 00:20:40.261 "base_bdevs_list": [ 00:20:40.262 { 00:20:40.262 "name": "NewBaseBdev", 00:20:40.262 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:40.262 "is_configured": true, 00:20:40.262 "data_offset": 2048, 00:20:40.262 "data_size": 63488 00:20:40.262 }, 00:20:40.262 { 00:20:40.262 "name": "BaseBdev2", 00:20:40.262 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:40.262 "is_configured": true, 00:20:40.262 "data_offset": 2048, 00:20:40.262 "data_size": 63488 00:20:40.262 }, 00:20:40.262 { 00:20:40.262 
"name": "BaseBdev3", 00:20:40.262 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:40.262 "is_configured": true, 00:20:40.262 "data_offset": 2048, 00:20:40.262 "data_size": 63488 00:20:40.262 }, 00:20:40.262 { 00:20:40.262 "name": "BaseBdev4", 00:20:40.262 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:40.262 "is_configured": true, 00:20:40.262 "data_offset": 2048, 00:20:40.262 "data_size": 63488 00:20:40.262 } 00:20:40.262 ] 00:20:40.262 } 00:20:40.262 } 00:20:40.262 }' 00:20:40.262 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:40.262 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:40.262 BaseBdev2 00:20:40.262 BaseBdev3 00:20:40.262 BaseBdev4' 00:20:40.262 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.262 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:40.262 10:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.521 "name": "NewBaseBdev", 00:20:40.521 "aliases": [ 00:20:40.521 "4516c302-8879-4aa4-9135-1e71a4b18a11" 00:20:40.521 ], 00:20:40.521 "product_name": "Malloc disk", 00:20:40.521 "block_size": 512, 00:20:40.521 "num_blocks": 65536, 00:20:40.521 "uuid": "4516c302-8879-4aa4-9135-1e71a4b18a11", 00:20:40.521 "assigned_rate_limits": { 00:20:40.521 "rw_ios_per_sec": 0, 00:20:40.521 "rw_mbytes_per_sec": 0, 00:20:40.521 "r_mbytes_per_sec": 0, 00:20:40.521 "w_mbytes_per_sec": 0 00:20:40.521 }, 00:20:40.521 "claimed": true, 00:20:40.521 "claim_type": "exclusive_write", 00:20:40.521 "zoned": false, 00:20:40.521 "supported_io_types": { 00:20:40.521 "read": true, 00:20:40.521 "write": true, 00:20:40.521 "unmap": true, 00:20:40.521 "flush": true, 00:20:40.521 "reset": true, 00:20:40.521 "nvme_admin": false, 00:20:40.521 "nvme_io": false, 00:20:40.521 "nvme_io_md": false, 00:20:40.521 "write_zeroes": true, 00:20:40.521 "zcopy": true, 00:20:40.521 "get_zone_info": false, 00:20:40.521 "zone_management": false, 00:20:40.521 "zone_append": false, 00:20:40.521 "compare": false, 00:20:40.521 "compare_and_write": false, 00:20:40.521 "abort": true, 00:20:40.521 "seek_hole": false, 00:20:40.521 "seek_data": false, 00:20:40.521 "copy": true, 00:20:40.521 "nvme_iov_md": false 00:20:40.521 }, 00:20:40.521 "memory_domains": [ 00:20:40.521 { 00:20:40.521 "dma_device_id": "system", 00:20:40.521 "dma_device_type": 1 00:20:40.521 }, 00:20:40.521 { 00:20:40.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.521 "dma_device_type": 2 00:20:40.521 } 00:20:40.521 ], 00:20:40.521 "driver_specific": {} 00:20:40.521 }' 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.521 10:30:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.521 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:40.779 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:41.037 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:41.037 "name": "BaseBdev2", 00:20:41.037 "aliases": [ 00:20:41.037 "37597e36-1373-477e-9cf4-827048afe63e" 00:20:41.037 ], 00:20:41.037 "product_name": "Malloc disk", 00:20:41.037 "block_size": 512, 00:20:41.037 "num_blocks": 65536, 00:20:41.037 "uuid": "37597e36-1373-477e-9cf4-827048afe63e", 00:20:41.037 "assigned_rate_limits": { 00:20:41.037 "rw_ios_per_sec": 0, 00:20:41.037 "rw_mbytes_per_sec": 0, 00:20:41.037 "r_mbytes_per_sec": 0, 00:20:41.037 "w_mbytes_per_sec": 0 00:20:41.037 }, 00:20:41.037 "claimed": true, 00:20:41.037 "claim_type": "exclusive_write", 00:20:41.038 "zoned": false, 00:20:41.038 "supported_io_types": { 00:20:41.038 "read": true, 00:20:41.038 "write": true, 00:20:41.038 "unmap": true, 00:20:41.038 "flush": true, 00:20:41.038 "reset": true, 00:20:41.038 "nvme_admin": false, 00:20:41.038 "nvme_io": false, 00:20:41.038 "nvme_io_md": false, 00:20:41.038 "write_zeroes": true, 00:20:41.038 "zcopy": true, 00:20:41.038 "get_zone_info": false, 00:20:41.038 "zone_management": false, 00:20:41.038 "zone_append": false, 00:20:41.038 "compare": false, 00:20:41.038 "compare_and_write": false, 00:20:41.038 "abort": true, 00:20:41.038 "seek_hole": false, 00:20:41.038 "seek_data": false, 00:20:41.038 "copy": true, 00:20:41.038 "nvme_iov_md": false 00:20:41.038 }, 00:20:41.038 "memory_domains": [ 00:20:41.038 { 00:20:41.038 "dma_device_id": "system", 00:20:41.038 "dma_device_type": 1 00:20:41.038 }, 00:20:41.038 { 00:20:41.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.038 "dma_device_type": 2 00:20:41.038 } 00:20:41.038 ], 00:20:41.038 "driver_specific": {} 00:20:41.038 }' 00:20:41.038 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.038 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.038 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:41.038 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.038 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.296 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.296 10:30:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.296 10:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:41.296 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:41.556 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:41.556 "name": "BaseBdev3", 00:20:41.556 "aliases": [ 00:20:41.556 "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463" 00:20:41.556 ], 00:20:41.556 "product_name": "Malloc disk", 00:20:41.556 "block_size": 512, 00:20:41.556 "num_blocks": 65536, 00:20:41.556 "uuid": "e5df7f31-3fbb-4fa6-9fab-05fc1cb14463", 00:20:41.556 "assigned_rate_limits": { 00:20:41.556 "rw_ios_per_sec": 0, 00:20:41.556 "rw_mbytes_per_sec": 0, 00:20:41.556 "r_mbytes_per_sec": 0, 00:20:41.556 "w_mbytes_per_sec": 0 00:20:41.556 }, 00:20:41.556 "claimed": true, 00:20:41.556 "claim_type": "exclusive_write", 00:20:41.556 "zoned": false, 00:20:41.556 "supported_io_types": { 00:20:41.556 "read": true, 00:20:41.556 "write": true, 00:20:41.556 "unmap": true, 00:20:41.556 "flush": true, 00:20:41.556 "reset": true, 00:20:41.556 "nvme_admin": false, 00:20:41.556 "nvme_io": false, 00:20:41.556 "nvme_io_md": false, 00:20:41.556 "write_zeroes": true, 00:20:41.556 "zcopy": true, 00:20:41.556 "get_zone_info": false, 00:20:41.556 "zone_management": false, 00:20:41.556 "zone_append": false, 00:20:41.556 "compare": false, 00:20:41.556 "compare_and_write": false, 00:20:41.556 "abort": true, 00:20:41.556 "seek_hole": false, 00:20:41.556 "seek_data": false, 00:20:41.556 "copy": true, 00:20:41.556 "nvme_iov_md": false 00:20:41.556 }, 00:20:41.556 "memory_domains": [ 00:20:41.556 { 00:20:41.556 "dma_device_id": "system", 00:20:41.556 "dma_device_type": 1 00:20:41.556 }, 00:20:41.556 { 00:20:41.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.556 "dma_device_type": 2 00:20:41.556 } 00:20:41.556 ], 00:20:41.556 "driver_specific": {} 00:20:41.556 }' 00:20:41.556 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.556 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.556 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:41.556 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.815 10:30:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:41.815 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:42.074 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:42.074 "name": "BaseBdev4", 00:20:42.074 "aliases": [ 00:20:42.074 "5b8f6fd1-8efa-423b-8d6a-8cd98a823157" 00:20:42.074 ], 00:20:42.074 "product_name": "Malloc disk", 00:20:42.074 "block_size": 512, 00:20:42.074 "num_blocks": 65536, 00:20:42.074 "uuid": "5b8f6fd1-8efa-423b-8d6a-8cd98a823157", 00:20:42.074 "assigned_rate_limits": { 00:20:42.074 "rw_ios_per_sec": 0, 00:20:42.074 "rw_mbytes_per_sec": 0, 00:20:42.074 "r_mbytes_per_sec": 0, 00:20:42.074 "w_mbytes_per_sec": 0 00:20:42.074 }, 00:20:42.074 "claimed": true, 00:20:42.074 "claim_type": "exclusive_write", 00:20:42.074 "zoned": false, 00:20:42.074 "supported_io_types": { 00:20:42.074 "read": true, 00:20:42.074 "write": true, 00:20:42.074 "unmap": true, 00:20:42.074 "flush": true, 00:20:42.074 "reset": true, 00:20:42.074 "nvme_admin": false, 00:20:42.074 "nvme_io": false, 00:20:42.074 "nvme_io_md": false, 00:20:42.074 "write_zeroes": true, 00:20:42.074 "zcopy": true, 00:20:42.074 "get_zone_info": false, 00:20:42.074 "zone_management": false, 00:20:42.074 "zone_append": false, 00:20:42.074 "compare": false, 00:20:42.074 "compare_and_write": false, 00:20:42.074 "abort": true, 00:20:42.074 "seek_hole": false, 00:20:42.074 "seek_data": false, 00:20:42.074 "copy": true, 00:20:42.074 "nvme_iov_md": false 00:20:42.074 }, 00:20:42.074 "memory_domains": [ 00:20:42.074 { 00:20:42.074 "dma_device_id": "system", 00:20:42.074 "dma_device_type": 1 00:20:42.074 }, 00:20:42.074 { 00:20:42.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.074 "dma_device_type": 2 00:20:42.074 } 00:20:42.074 ], 00:20:42.074 "driver_specific": {} 00:20:42.074 }' 00:20:42.074 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.074 10:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.332 10:30:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.332 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:42.591 [2024-07-26 10:30:55.468640] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:42.591 [2024-07-26 10:30:55.468666] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.591 [2024-07-26 10:30:55.468713] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.591 [2024-07-26 10:30:55.468767] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.591 [2024-07-26 10:30:55.468778] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be2cb0 name Existed_Raid, state offline 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3424434 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3424434 ']' 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3424434 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:42.591 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3424434 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3424434' 00:20:42.851 killing process with pid 3424434 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3424434 00:20:42.851 [2024-07-26 10:30:55.542327] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3424434 00:20:42.851 [2024-07-26 10:30:55.573697] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:42.851 00:20:42.851 real 0m30.412s 00:20:42.851 user 0m55.780s 00:20:42.851 sys 0m5.470s 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.851 10:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.851 ************************************ 00:20:42.851 END TEST raid_state_function_test_sb 00:20:42.851 ************************************ 00:20:43.111 10:30:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:20:43.111 10:30:55 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:43.111 10:30:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.111 10:30:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.111 ************************************ 00:20:43.111 START TEST raid_superblock_test 00:20:43.111 ************************************ 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3430270 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3430270 /var/tmp/spdk-raid.sock 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3430270 ']' 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:43.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.111 10:30:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.111 [2024-07-26 10:30:55.892589] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:20:43.111 [2024-07-26 10:30:55.892644] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430270 ] 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:43.111 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.111 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 
0000:3f:01.4 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:43.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.112 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:43.371 [2024-07-26 10:30:56.024857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.371 [2024-07-26 10:30:56.069243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.371 [2024-07-26 10:30:56.129298] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.371 [2024-07-26 10:30:56.129334] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.939 10:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.939 10:30:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:43.939 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:43.940 10:30:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:44.199 malloc1 00:20:44.199 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:44.458 [2024-07-26 10:30:57.205333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:44.458 [2024-07-26 10:30:57.205375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.458 [2024-07-26 10:30:57.205394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f6270 00:20:44.458 [2024-07-26 10:30:57.205406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.458 [2024-07-26 10:30:57.206811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.458 [2024-07-26 10:30:57.206838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:44.458 pt1 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:44.458 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:44.716 malloc2 00:20:44.716 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:44.975 [2024-07-26 10:30:57.666826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:44.975 [2024-07-26 10:30:57.666871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.975 [2024-07-26 10:30:57.666886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b22f0 00:20:44.975 [2024-07-26 10:30:57.666897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.975 [2024-07-26 10:30:57.668338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.975 [2024-07-26 10:30:57.668363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:44.975 pt2 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:44.975 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:45.234 malloc3 00:20:45.234 10:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:45.234 [2024-07-26 10:30:58.124358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:45.234 [2024-07-26 10:30:58.124400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.234 [2024-07-26 10:30:58.124416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x107c650 00:20:45.234 [2024-07-26 10:30:58.124427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.234 [2024-07-26 10:30:58.125892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.234 [2024-07-26 10:30:58.125919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:45.234 pt3 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:45.493 malloc4 00:20:45.493 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:45.752 [2024-07-26 10:30:58.557770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:45.752 [2024-07-26 10:30:58.557811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.752 [2024-07-26 10:30:58.557827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x107dce0 00:20:45.752 [2024-07-26 10:30:58.557838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.752 [2024-07-26 10:30:58.559175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.752 [2024-07-26 10:30:58.559201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt4 00:20:45.752 pt4 00:20:45.752 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:45.752 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:45.752 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:46.011 [2024-07-26 10:30:58.782385] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:46.011 [2024-07-26 10:30:58.783561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:46.011 [2024-07-26 10:30:58.783611] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:46.011 [2024-07-26 10:30:58.783654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:46.011 [2024-07-26 10:30:58.783792] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10806e0 00:20:46.011 [2024-07-26 10:30:58.783802] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:46.011 [2024-07-26 10:30:58.783983] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf5b320 00:20:46.011 [2024-07-26 10:30:58.784107] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10806e0 00:20:46.011 [2024-07-26 10:30:58.784116] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10806e0 00:20:46.011 [2024-07-26 10:30:58.784228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.011 10:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.270 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.270 "name": "raid_bdev1", 00:20:46.270 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:46.270 "strip_size_kb": 64, 00:20:46.270 "state": "online", 00:20:46.270 "raid_level": "raid0", 00:20:46.270 "superblock": true, 00:20:46.270 "num_base_bdevs": 4, 00:20:46.270 "num_base_bdevs_discovered": 
4, 00:20:46.270 "num_base_bdevs_operational": 4, 00:20:46.270 "base_bdevs_list": [ 00:20:46.270 { 00:20:46.270 "name": "pt1", 00:20:46.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:46.270 "is_configured": true, 00:20:46.270 "data_offset": 2048, 00:20:46.270 "data_size": 63488 00:20:46.270 }, 00:20:46.270 { 00:20:46.270 "name": "pt2", 00:20:46.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:46.270 "is_configured": true, 00:20:46.270 "data_offset": 2048, 00:20:46.270 "data_size": 63488 00:20:46.270 }, 00:20:46.270 { 00:20:46.270 "name": "pt3", 00:20:46.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:46.270 "is_configured": true, 00:20:46.270 "data_offset": 2048, 00:20:46.270 "data_size": 63488 00:20:46.270 }, 00:20:46.270 { 00:20:46.270 "name": "pt4", 00:20:46.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:46.270 "is_configured": true, 00:20:46.270 "data_offset": 2048, 00:20:46.270 "data_size": 63488 00:20:46.270 } 00:20:46.270 ] 00:20:46.270 }' 00:20:46.270 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.270 10:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:46.837 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:47.096 [2024-07-26 10:30:59.781268] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.096 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:47.096 "name": "raid_bdev1", 00:20:47.096 "aliases": [ 00:20:47.096 "67ea419f-e027-4ba1-b466-3717e69f325f" 00:20:47.096 ], 00:20:47.096 "product_name": "Raid Volume", 00:20:47.096 "block_size": 512, 00:20:47.096 "num_blocks": 253952, 00:20:47.096 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:47.096 "assigned_rate_limits": { 00:20:47.096 "rw_ios_per_sec": 0, 00:20:47.096 "rw_mbytes_per_sec": 0, 00:20:47.096 "r_mbytes_per_sec": 0, 00:20:47.096 "w_mbytes_per_sec": 0 00:20:47.096 }, 00:20:47.096 "claimed": false, 00:20:47.096 "zoned": false, 00:20:47.096 "supported_io_types": { 00:20:47.096 "read": true, 00:20:47.096 "write": true, 00:20:47.096 "unmap": true, 00:20:47.096 "flush": true, 00:20:47.096 "reset": true, 00:20:47.096 "nvme_admin": false, 00:20:47.096 "nvme_io": false, 00:20:47.096 "nvme_io_md": false, 00:20:47.096 "write_zeroes": true, 00:20:47.096 "zcopy": false, 00:20:47.096 "get_zone_info": false, 00:20:47.096 "zone_management": false, 00:20:47.096 "zone_append": false, 00:20:47.096 "compare": false, 00:20:47.096 "compare_and_write": false, 00:20:47.096 "abort": false, 00:20:47.096 "seek_hole": false, 00:20:47.096 "seek_data": false, 00:20:47.096 "copy": 
false, 00:20:47.096 "nvme_iov_md": false 00:20:47.096 }, 00:20:47.096 "memory_domains": [ 00:20:47.096 { 00:20:47.096 "dma_device_id": "system", 00:20:47.096 "dma_device_type": 1 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.096 "dma_device_type": 2 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "system", 00:20:47.096 "dma_device_type": 1 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.096 "dma_device_type": 2 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "system", 00:20:47.096 "dma_device_type": 1 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.096 "dma_device_type": 2 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "system", 00:20:47.096 "dma_device_type": 1 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.096 "dma_device_type": 2 00:20:47.096 } 00:20:47.096 ], 00:20:47.096 "driver_specific": { 00:20:47.096 "raid": { 00:20:47.096 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:47.096 "strip_size_kb": 64, 00:20:47.096 "state": "online", 00:20:47.096 "raid_level": "raid0", 00:20:47.096 "superblock": true, 00:20:47.096 "num_base_bdevs": 4, 00:20:47.096 "num_base_bdevs_discovered": 4, 00:20:47.096 "num_base_bdevs_operational": 4, 00:20:47.096 "base_bdevs_list": [ 00:20:47.096 { 00:20:47.096 "name": "pt1", 00:20:47.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:47.096 "is_configured": true, 00:20:47.096 "data_offset": 2048, 00:20:47.096 "data_size": 63488 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "name": "pt2", 00:20:47.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.096 "is_configured": true, 00:20:47.096 "data_offset": 2048, 00:20:47.096 "data_size": 63488 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "name": "pt3", 00:20:47.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:47.096 "is_configured": true, 00:20:47.096 "data_offset": 2048, 00:20:47.096 "data_size": 63488 00:20:47.096 }, 00:20:47.096 { 00:20:47.096 "name": "pt4", 00:20:47.096 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:47.096 "is_configured": true, 00:20:47.096 "data_offset": 2048, 00:20:47.096 "data_size": 63488 00:20:47.096 } 00:20:47.096 ] 00:20:47.096 } 00:20:47.096 } 00:20:47.096 }' 00:20:47.097 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:47.097 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:47.097 pt2 00:20:47.097 pt3 00:20:47.097 pt4' 00:20:47.097 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.097 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:47.097 10:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:47.355 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:47.355 "name": "pt1", 00:20:47.355 "aliases": [ 00:20:47.355 "00000000-0000-0000-0000-000000000001" 00:20:47.355 ], 00:20:47.355 "product_name": "passthru", 00:20:47.355 "block_size": 512, 00:20:47.355 "num_blocks": 65536, 00:20:47.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:47.355 "assigned_rate_limits": { 00:20:47.355 "rw_ios_per_sec": 0, 
00:20:47.356 "rw_mbytes_per_sec": 0, 00:20:47.356 "r_mbytes_per_sec": 0, 00:20:47.356 "w_mbytes_per_sec": 0 00:20:47.356 }, 00:20:47.356 "claimed": true, 00:20:47.356 "claim_type": "exclusive_write", 00:20:47.356 "zoned": false, 00:20:47.356 "supported_io_types": { 00:20:47.356 "read": true, 00:20:47.356 "write": true, 00:20:47.356 "unmap": true, 00:20:47.356 "flush": true, 00:20:47.356 "reset": true, 00:20:47.356 "nvme_admin": false, 00:20:47.356 "nvme_io": false, 00:20:47.356 "nvme_io_md": false, 00:20:47.356 "write_zeroes": true, 00:20:47.356 "zcopy": true, 00:20:47.356 "get_zone_info": false, 00:20:47.356 "zone_management": false, 00:20:47.356 "zone_append": false, 00:20:47.356 "compare": false, 00:20:47.356 "compare_and_write": false, 00:20:47.356 "abort": true, 00:20:47.356 "seek_hole": false, 00:20:47.356 "seek_data": false, 00:20:47.356 "copy": true, 00:20:47.356 "nvme_iov_md": false 00:20:47.356 }, 00:20:47.356 "memory_domains": [ 00:20:47.356 { 00:20:47.356 "dma_device_id": "system", 00:20:47.356 "dma_device_type": 1 00:20:47.356 }, 00:20:47.356 { 00:20:47.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.356 "dma_device_type": 2 00:20:47.356 } 00:20:47.356 ], 00:20:47.356 "driver_specific": { 00:20:47.356 "passthru": { 00:20:47.356 "name": "pt1", 00:20:47.356 "base_bdev_name": "malloc1" 00:20:47.356 } 00:20:47.356 } 00:20:47.356 }' 00:20:47.356 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.356 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.356 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:47.356 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.356 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.356 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:47.615 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:47.874 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:47.874 "name": "pt2", 00:20:47.874 "aliases": [ 00:20:47.874 "00000000-0000-0000-0000-000000000002" 00:20:47.874 ], 00:20:47.874 "product_name": "passthru", 00:20:47.874 "block_size": 512, 00:20:47.874 "num_blocks": 65536, 00:20:47.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.874 "assigned_rate_limits": { 00:20:47.874 "rw_ios_per_sec": 0, 00:20:47.874 "rw_mbytes_per_sec": 0, 00:20:47.874 "r_mbytes_per_sec": 0, 00:20:47.874 "w_mbytes_per_sec": 0 00:20:47.874 }, 
00:20:47.874 "claimed": true, 00:20:47.874 "claim_type": "exclusive_write", 00:20:47.874 "zoned": false, 00:20:47.874 "supported_io_types": { 00:20:47.874 "read": true, 00:20:47.874 "write": true, 00:20:47.874 "unmap": true, 00:20:47.874 "flush": true, 00:20:47.874 "reset": true, 00:20:47.874 "nvme_admin": false, 00:20:47.874 "nvme_io": false, 00:20:47.874 "nvme_io_md": false, 00:20:47.874 "write_zeroes": true, 00:20:47.874 "zcopy": true, 00:20:47.874 "get_zone_info": false, 00:20:47.874 "zone_management": false, 00:20:47.874 "zone_append": false, 00:20:47.874 "compare": false, 00:20:47.874 "compare_and_write": false, 00:20:47.874 "abort": true, 00:20:47.874 "seek_hole": false, 00:20:47.874 "seek_data": false, 00:20:47.874 "copy": true, 00:20:47.874 "nvme_iov_md": false 00:20:47.874 }, 00:20:47.874 "memory_domains": [ 00:20:47.874 { 00:20:47.874 "dma_device_id": "system", 00:20:47.874 "dma_device_type": 1 00:20:47.874 }, 00:20:47.874 { 00:20:47.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.874 "dma_device_type": 2 00:20:47.874 } 00:20:47.874 ], 00:20:47.874 "driver_specific": { 00:20:47.874 "passthru": { 00:20:47.874 "name": "pt2", 00:20:47.874 "base_bdev_name": "malloc2" 00:20:47.874 } 00:20:47.874 } 00:20:47.874 }' 00:20:47.874 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.874 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.874 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:47.874 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.874 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:48.133 10:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.392 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.392 "name": "pt3", 00:20:48.392 "aliases": [ 00:20:48.392 "00000000-0000-0000-0000-000000000003" 00:20:48.392 ], 00:20:48.392 "product_name": "passthru", 00:20:48.392 "block_size": 512, 00:20:48.392 "num_blocks": 65536, 00:20:48.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:48.392 "assigned_rate_limits": { 00:20:48.392 "rw_ios_per_sec": 0, 00:20:48.392 "rw_mbytes_per_sec": 0, 00:20:48.392 "r_mbytes_per_sec": 0, 00:20:48.392 "w_mbytes_per_sec": 0 00:20:48.392 }, 00:20:48.392 "claimed": true, 00:20:48.392 "claim_type": "exclusive_write", 00:20:48.392 "zoned": false, 00:20:48.392 
"supported_io_types": { 00:20:48.393 "read": true, 00:20:48.393 "write": true, 00:20:48.393 "unmap": true, 00:20:48.393 "flush": true, 00:20:48.393 "reset": true, 00:20:48.393 "nvme_admin": false, 00:20:48.393 "nvme_io": false, 00:20:48.393 "nvme_io_md": false, 00:20:48.393 "write_zeroes": true, 00:20:48.393 "zcopy": true, 00:20:48.393 "get_zone_info": false, 00:20:48.393 "zone_management": false, 00:20:48.393 "zone_append": false, 00:20:48.393 "compare": false, 00:20:48.393 "compare_and_write": false, 00:20:48.393 "abort": true, 00:20:48.393 "seek_hole": false, 00:20:48.393 "seek_data": false, 00:20:48.393 "copy": true, 00:20:48.393 "nvme_iov_md": false 00:20:48.393 }, 00:20:48.393 "memory_domains": [ 00:20:48.393 { 00:20:48.393 "dma_device_id": "system", 00:20:48.393 "dma_device_type": 1 00:20:48.393 }, 00:20:48.393 { 00:20:48.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.393 "dma_device_type": 2 00:20:48.393 } 00:20:48.393 ], 00:20:48.393 "driver_specific": { 00:20:48.393 "passthru": { 00:20:48.393 "name": "pt3", 00:20:48.393 "base_bdev_name": "malloc3" 00:20:48.393 } 00:20:48.393 } 00:20:48.393 }' 00:20:48.393 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.393 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.393 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.393 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:48.652 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.912 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.912 "name": "pt4", 00:20:48.912 "aliases": [ 00:20:48.912 "00000000-0000-0000-0000-000000000004" 00:20:48.912 ], 00:20:48.912 "product_name": "passthru", 00:20:48.912 "block_size": 512, 00:20:48.912 "num_blocks": 65536, 00:20:48.912 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:48.912 "assigned_rate_limits": { 00:20:48.912 "rw_ios_per_sec": 0, 00:20:48.912 "rw_mbytes_per_sec": 0, 00:20:48.912 "r_mbytes_per_sec": 0, 00:20:48.912 "w_mbytes_per_sec": 0 00:20:48.912 }, 00:20:48.912 "claimed": true, 00:20:48.912 "claim_type": "exclusive_write", 00:20:48.912 "zoned": false, 00:20:48.912 "supported_io_types": { 00:20:48.912 "read": true, 00:20:48.912 "write": true, 00:20:48.912 "unmap": true, 00:20:48.912 "flush": true, 
00:20:48.912 "reset": true, 00:20:48.912 "nvme_admin": false, 00:20:48.912 "nvme_io": false, 00:20:48.912 "nvme_io_md": false, 00:20:48.912 "write_zeroes": true, 00:20:48.912 "zcopy": true, 00:20:48.912 "get_zone_info": false, 00:20:48.912 "zone_management": false, 00:20:48.912 "zone_append": false, 00:20:48.912 "compare": false, 00:20:48.912 "compare_and_write": false, 00:20:48.912 "abort": true, 00:20:48.912 "seek_hole": false, 00:20:48.912 "seek_data": false, 00:20:48.912 "copy": true, 00:20:48.912 "nvme_iov_md": false 00:20:48.912 }, 00:20:48.912 "memory_domains": [ 00:20:48.912 { 00:20:48.912 "dma_device_id": "system", 00:20:48.912 "dma_device_type": 1 00:20:48.912 }, 00:20:48.912 { 00:20:48.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.912 "dma_device_type": 2 00:20:48.912 } 00:20:48.912 ], 00:20:48.912 "driver_specific": { 00:20:48.912 "passthru": { 00:20:48.912 "name": "pt4", 00:20:48.912 "base_bdev_name": "malloc4" 00:20:48.912 } 00:20:48.912 } 00:20:48.912 }' 00:20:48.912 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.912 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.171 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:49.171 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.171 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.171 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:49.171 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.171 10:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.171 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:49.171 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.171 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.431 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:49.431 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:49.431 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:20:49.431 [2024-07-26 10:31:02.303944] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.431 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=67ea419f-e027-4ba1-b466-3717e69f325f 00:20:49.431 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 67ea419f-e027-4ba1-b466-3717e69f325f ']' 00:20:49.431 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:49.690 [2024-07-26 10:31:02.528237] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:49.690 [2024-07-26 10:31:02.528252] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.690 [2024-07-26 10:31:02.528295] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.690 [2024-07-26 10:31:02.528353] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:20:49.690 [2024-07-26 10:31:02.528364] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10806e0 name raid_bdev1, state offline 00:20:49.690 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.690 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:20:49.950 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:20:49.950 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:20:49.950 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:49.950 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:50.209 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:50.209 10:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:50.468 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:50.468 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:50.726 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:50.727 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:50.985 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:50.985 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:51.244 10:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:51.244 [2024-07-26 10:31:04.104308] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:51.244 [2024-07-26 10:31:04.105544] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:51.244 [2024-07-26 10:31:04.105584] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:51.244 [2024-07-26 10:31:04.105616] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:51.244 [2024-07-26 10:31:04.105658] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:51.244 [2024-07-26 10:31:04.105694] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:51.244 [2024-07-26 10:31:04.105714] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:51.244 [2024-07-26 10:31:04.105734] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:51.244 [2024-07-26 10:31:04.105753] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.244 [2024-07-26 10:31:04.105762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10806e0 name raid_bdev1, state configuring 00:20:51.244 request: 00:20:51.244 { 00:20:51.244 "name": "raid_bdev1", 00:20:51.244 "raid_level": "raid0", 00:20:51.244 "base_bdevs": [ 00:20:51.244 "malloc1", 00:20:51.244 "malloc2", 00:20:51.244 "malloc3", 00:20:51.244 "malloc4" 00:20:51.244 ], 00:20:51.244 "strip_size_kb": 64, 00:20:51.244 "superblock": false, 00:20:51.244 "method": "bdev_raid_create", 00:20:51.244 "req_id": 1 00:20:51.244 } 00:20:51.244 Got JSON-RPC error response 00:20:51.244 response: 00:20:51.244 { 00:20:51.244 "code": -17, 00:20:51.244 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:51.244 } 00:20:51.244 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:51.244 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.244 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.244 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.244 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.244 10:31:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:20:51.502 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:20:51.502 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:20:51.502 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:51.761 [2024-07-26 10:31:04.557440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:51.761 [2024-07-26 10:31:04.557478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.761 [2024-07-26 10:31:04.557493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b1a60 00:20:51.761 [2024-07-26 10:31:04.557504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.761 [2024-07-26 10:31:04.558945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.761 [2024-07-26 10:31:04.558971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:51.761 [2024-07-26 10:31:04.559026] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:51.761 [2024-07-26 10:31:04.559048] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:51.761 pt1 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.761 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.021 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.021 "name": "raid_bdev1", 00:20:52.021 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:52.021 "strip_size_kb": 64, 00:20:52.021 "state": "configuring", 00:20:52.021 "raid_level": "raid0", 00:20:52.021 "superblock": true, 00:20:52.021 "num_base_bdevs": 4, 00:20:52.021 "num_base_bdevs_discovered": 1, 00:20:52.021 "num_base_bdevs_operational": 4, 00:20:52.021 "base_bdevs_list": [ 00:20:52.021 { 00:20:52.021 "name": "pt1", 00:20:52.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:52.021 
"is_configured": true, 00:20:52.021 "data_offset": 2048, 00:20:52.021 "data_size": 63488 00:20:52.021 }, 00:20:52.021 { 00:20:52.021 "name": null, 00:20:52.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.021 "is_configured": false, 00:20:52.021 "data_offset": 2048, 00:20:52.021 "data_size": 63488 00:20:52.021 }, 00:20:52.021 { 00:20:52.021 "name": null, 00:20:52.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:52.021 "is_configured": false, 00:20:52.021 "data_offset": 2048, 00:20:52.021 "data_size": 63488 00:20:52.021 }, 00:20:52.021 { 00:20:52.021 "name": null, 00:20:52.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:52.021 "is_configured": false, 00:20:52.021 "data_offset": 2048, 00:20:52.021 "data_size": 63488 00:20:52.021 } 00:20:52.021 ] 00:20:52.021 }' 00:20:52.021 10:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.021 10:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.589 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:20:52.589 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:52.849 [2024-07-26 10:31:05.580152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:52.849 [2024-07-26 10:31:05.580204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.849 [2024-07-26 10:31:05.580224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10806e0 00:20:52.849 [2024-07-26 10:31:05.580235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.849 [2024-07-26 10:31:05.580547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.849 [2024-07-26 10:31:05.580563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:52.849 [2024-07-26 10:31:05.580619] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:52.849 [2024-07-26 10:31:05.580636] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:52.849 pt2 00:20:52.849 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:53.112 [2024-07-26 10:31:05.804801] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.112 10:31:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.112 10:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.371 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.371 "name": "raid_bdev1", 00:20:53.371 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:53.371 "strip_size_kb": 64, 00:20:53.371 "state": "configuring", 00:20:53.371 "raid_level": "raid0", 00:20:53.371 "superblock": true, 00:20:53.371 "num_base_bdevs": 4, 00:20:53.371 "num_base_bdevs_discovered": 1, 00:20:53.371 "num_base_bdevs_operational": 4, 00:20:53.371 "base_bdevs_list": [ 00:20:53.371 { 00:20:53.371 "name": "pt1", 00:20:53.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:53.371 "is_configured": true, 00:20:53.371 "data_offset": 2048, 00:20:53.371 "data_size": 63488 00:20:53.371 }, 00:20:53.371 { 00:20:53.371 "name": null, 00:20:53.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.371 "is_configured": false, 00:20:53.371 "data_offset": 2048, 00:20:53.371 "data_size": 63488 00:20:53.371 }, 00:20:53.371 { 00:20:53.371 "name": null, 00:20:53.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:53.371 "is_configured": false, 00:20:53.371 "data_offset": 2048, 00:20:53.371 "data_size": 63488 00:20:53.371 }, 00:20:53.371 { 00:20:53.371 "name": null, 00:20:53.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:53.371 "is_configured": false, 00:20:53.371 "data_offset": 2048, 00:20:53.371 "data_size": 63488 00:20:53.371 } 00:20:53.371 ] 00:20:53.371 }' 00:20:53.371 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.371 10:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.940 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:20:53.940 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:53.940 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:53.940 [2024-07-26 10:31:06.795393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:53.940 [2024-07-26 10:31:06.795444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.940 [2024-07-26 10:31:06.795461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10813b0 00:20:53.940 [2024-07-26 10:31:06.795473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.940 [2024-07-26 10:31:06.795781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.940 [2024-07-26 10:31:06.795797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:53.940 [2024-07-26 10:31:06.795855] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:53.940 [2024-07-26 10:31:06.795872] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.940 pt2 
00:20:53.940 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:53.940 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:53.940 10:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:54.199 [2024-07-26 10:31:07.023998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:54.199 [2024-07-26 10:31:07.024041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.199 [2024-07-26 10:31:07.024060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f73a0 00:20:54.199 [2024-07-26 10:31:07.024071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.199 [2024-07-26 10:31:07.024370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.199 [2024-07-26 10:31:07.024387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:54.199 [2024-07-26 10:31:07.024449] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:54.200 [2024-07-26 10:31:07.024465] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:54.200 pt3 00:20:54.200 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:54.200 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:54.200 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:54.459 [2024-07-26 10:31:07.248593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:54.459 [2024-07-26 10:31:07.248630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.459 [2024-07-26 10:31:07.248648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf43080 00:20:54.459 [2024-07-26 10:31:07.248659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.459 [2024-07-26 10:31:07.248931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.459 [2024-07-26 10:31:07.248947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:54.459 [2024-07-26 10:31:07.248999] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:54.459 [2024-07-26 10:31:07.249015] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:54.459 [2024-07-26 10:31:07.249123] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x107eed0 00:20:54.459 [2024-07-26 10:31:07.249132] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:54.459 [2024-07-26 10:31:07.249294] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1084300 00:20:54.459 [2024-07-26 10:31:07.249411] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x107eed0 00:20:54.459 [2024-07-26 10:31:07.249420] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x107eed0 00:20:54.459 [2024-07-26 10:31:07.249507] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.459 pt4 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.459 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.719 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.719 "name": "raid_bdev1", 00:20:54.719 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:54.719 "strip_size_kb": 64, 00:20:54.719 "state": "online", 00:20:54.719 "raid_level": "raid0", 00:20:54.719 "superblock": true, 00:20:54.719 "num_base_bdevs": 4, 00:20:54.719 "num_base_bdevs_discovered": 4, 00:20:54.719 "num_base_bdevs_operational": 4, 00:20:54.719 "base_bdevs_list": [ 00:20:54.719 { 00:20:54.719 "name": "pt1", 00:20:54.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.719 "is_configured": true, 00:20:54.719 "data_offset": 2048, 00:20:54.719 "data_size": 63488 00:20:54.719 }, 00:20:54.719 { 00:20:54.719 "name": "pt2", 00:20:54.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.719 "is_configured": true, 00:20:54.719 "data_offset": 2048, 00:20:54.719 "data_size": 63488 00:20:54.719 }, 00:20:54.719 { 00:20:54.719 "name": "pt3", 00:20:54.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.719 "is_configured": true, 00:20:54.719 "data_offset": 2048, 00:20:54.719 "data_size": 63488 00:20:54.719 }, 00:20:54.719 { 00:20:54.719 "name": "pt4", 00:20:54.719 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:54.719 "is_configured": true, 00:20:54.719 "data_offset": 2048, 00:20:54.719 "data_size": 63488 00:20:54.719 } 00:20:54.719 ] 00:20:54.719 }' 00:20:54.719 10:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.719 10:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 
00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:55.288 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:55.547 [2024-07-26 10:31:08.215430] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.547 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:55.547 "name": "raid_bdev1", 00:20:55.547 "aliases": [ 00:20:55.547 "67ea419f-e027-4ba1-b466-3717e69f325f" 00:20:55.547 ], 00:20:55.547 "product_name": "Raid Volume", 00:20:55.547 "block_size": 512, 00:20:55.547 "num_blocks": 253952, 00:20:55.547 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:55.547 "assigned_rate_limits": { 00:20:55.547 "rw_ios_per_sec": 0, 00:20:55.547 "rw_mbytes_per_sec": 0, 00:20:55.547 "r_mbytes_per_sec": 0, 00:20:55.547 "w_mbytes_per_sec": 0 00:20:55.547 }, 00:20:55.547 "claimed": false, 00:20:55.547 "zoned": false, 00:20:55.547 "supported_io_types": { 00:20:55.547 "read": true, 00:20:55.547 "write": true, 00:20:55.547 "unmap": true, 00:20:55.547 "flush": true, 00:20:55.547 "reset": true, 00:20:55.547 "nvme_admin": false, 00:20:55.547 "nvme_io": false, 00:20:55.547 "nvme_io_md": false, 00:20:55.547 "write_zeroes": true, 00:20:55.547 "zcopy": false, 00:20:55.547 "get_zone_info": false, 00:20:55.547 "zone_management": false, 00:20:55.547 "zone_append": false, 00:20:55.547 "compare": false, 00:20:55.547 "compare_and_write": false, 00:20:55.547 "abort": false, 00:20:55.548 "seek_hole": false, 00:20:55.548 "seek_data": false, 00:20:55.548 "copy": false, 00:20:55.548 "nvme_iov_md": false 00:20:55.548 }, 00:20:55.548 "memory_domains": [ 00:20:55.548 { 00:20:55.548 "dma_device_id": "system", 00:20:55.548 "dma_device_type": 1 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.548 "dma_device_type": 2 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "system", 00:20:55.548 "dma_device_type": 1 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.548 "dma_device_type": 2 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "system", 00:20:55.548 "dma_device_type": 1 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.548 "dma_device_type": 2 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "system", 00:20:55.548 "dma_device_type": 1 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.548 "dma_device_type": 2 00:20:55.548 } 00:20:55.548 ], 00:20:55.548 "driver_specific": { 00:20:55.548 "raid": { 00:20:55.548 "uuid": "67ea419f-e027-4ba1-b466-3717e69f325f", 00:20:55.548 "strip_size_kb": 64, 00:20:55.548 "state": "online", 00:20:55.548 "raid_level": "raid0", 00:20:55.548 "superblock": true, 00:20:55.548 "num_base_bdevs": 4, 00:20:55.548 "num_base_bdevs_discovered": 4, 00:20:55.548 "num_base_bdevs_operational": 4, 00:20:55.548 "base_bdevs_list": [ 00:20:55.548 { 
00:20:55.548 "name": "pt1", 00:20:55.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.548 "is_configured": true, 00:20:55.548 "data_offset": 2048, 00:20:55.548 "data_size": 63488 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "name": "pt2", 00:20:55.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:55.548 "is_configured": true, 00:20:55.548 "data_offset": 2048, 00:20:55.548 "data_size": 63488 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "name": "pt3", 00:20:55.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:55.548 "is_configured": true, 00:20:55.548 "data_offset": 2048, 00:20:55.548 "data_size": 63488 00:20:55.548 }, 00:20:55.548 { 00:20:55.548 "name": "pt4", 00:20:55.548 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:55.548 "is_configured": true, 00:20:55.548 "data_offset": 2048, 00:20:55.548 "data_size": 63488 00:20:55.548 } 00:20:55.548 ] 00:20:55.548 } 00:20:55.548 } 00:20:55.548 }' 00:20:55.548 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.548 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:55.548 pt2 00:20:55.548 pt3 00:20:55.548 pt4' 00:20:55.548 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.548 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:55.548 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.807 "name": "pt1", 00:20:55.807 "aliases": [ 00:20:55.807 "00000000-0000-0000-0000-000000000001" 00:20:55.807 ], 00:20:55.807 "product_name": "passthru", 00:20:55.807 "block_size": 512, 00:20:55.807 "num_blocks": 65536, 00:20:55.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.807 "assigned_rate_limits": { 00:20:55.807 "rw_ios_per_sec": 0, 00:20:55.807 "rw_mbytes_per_sec": 0, 00:20:55.807 "r_mbytes_per_sec": 0, 00:20:55.807 "w_mbytes_per_sec": 0 00:20:55.807 }, 00:20:55.807 "claimed": true, 00:20:55.807 "claim_type": "exclusive_write", 00:20:55.807 "zoned": false, 00:20:55.807 "supported_io_types": { 00:20:55.807 "read": true, 00:20:55.807 "write": true, 00:20:55.807 "unmap": true, 00:20:55.807 "flush": true, 00:20:55.807 "reset": true, 00:20:55.807 "nvme_admin": false, 00:20:55.807 "nvme_io": false, 00:20:55.807 "nvme_io_md": false, 00:20:55.807 "write_zeroes": true, 00:20:55.807 "zcopy": true, 00:20:55.807 "get_zone_info": false, 00:20:55.807 "zone_management": false, 00:20:55.807 "zone_append": false, 00:20:55.807 "compare": false, 00:20:55.807 "compare_and_write": false, 00:20:55.807 "abort": true, 00:20:55.807 "seek_hole": false, 00:20:55.807 "seek_data": false, 00:20:55.807 "copy": true, 00:20:55.807 "nvme_iov_md": false 00:20:55.807 }, 00:20:55.807 "memory_domains": [ 00:20:55.807 { 00:20:55.807 "dma_device_id": "system", 00:20:55.807 "dma_device_type": 1 00:20:55.807 }, 00:20:55.807 { 00:20:55.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.807 "dma_device_type": 2 00:20:55.807 } 00:20:55.807 ], 00:20:55.807 "driver_specific": { 00:20:55.807 "passthru": { 00:20:55.807 "name": "pt1", 00:20:55.807 "base_bdev_name": "malloc1" 00:20:55.807 } 00:20:55.807 } 00:20:55.807 }' 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.807 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:56.067 10:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.326 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.326 "name": "pt2", 00:20:56.326 "aliases": [ 00:20:56.326 "00000000-0000-0000-0000-000000000002" 00:20:56.326 ], 00:20:56.326 "product_name": "passthru", 00:20:56.326 "block_size": 512, 00:20:56.326 "num_blocks": 65536, 00:20:56.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:56.326 "assigned_rate_limits": { 00:20:56.326 "rw_ios_per_sec": 0, 00:20:56.326 "rw_mbytes_per_sec": 0, 00:20:56.326 "r_mbytes_per_sec": 0, 00:20:56.326 "w_mbytes_per_sec": 0 00:20:56.326 }, 00:20:56.326 "claimed": true, 00:20:56.326 "claim_type": "exclusive_write", 00:20:56.326 "zoned": false, 00:20:56.326 "supported_io_types": { 00:20:56.326 "read": true, 00:20:56.326 "write": true, 00:20:56.326 "unmap": true, 00:20:56.326 "flush": true, 00:20:56.326 "reset": true, 00:20:56.326 "nvme_admin": false, 00:20:56.326 "nvme_io": false, 00:20:56.326 "nvme_io_md": false, 00:20:56.326 "write_zeroes": true, 00:20:56.326 "zcopy": true, 00:20:56.326 "get_zone_info": false, 00:20:56.326 "zone_management": false, 00:20:56.326 "zone_append": false, 00:20:56.326 "compare": false, 00:20:56.326 "compare_and_write": false, 00:20:56.326 "abort": true, 00:20:56.326 "seek_hole": false, 00:20:56.326 "seek_data": false, 00:20:56.326 "copy": true, 00:20:56.326 "nvme_iov_md": false 00:20:56.326 }, 00:20:56.326 "memory_domains": [ 00:20:56.326 { 00:20:56.326 "dma_device_id": "system", 00:20:56.326 "dma_device_type": 1 00:20:56.326 }, 00:20:56.326 { 00:20:56.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.326 "dma_device_type": 2 00:20:56.326 } 00:20:56.326 ], 00:20:56.326 "driver_specific": { 00:20:56.326 "passthru": { 00:20:56.326 "name": "pt2", 00:20:56.326 "base_bdev_name": "malloc2" 00:20:56.326 } 00:20:56.326 } 00:20:56.326 }' 00:20:56.326 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.326 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:20:56.326 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.326 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.326 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.585 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.585 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.585 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.585 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.585 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.586 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.586 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.586 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.586 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:56.586 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.845 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.845 "name": "pt3", 00:20:56.845 "aliases": [ 00:20:56.845 "00000000-0000-0000-0000-000000000003" 00:20:56.845 ], 00:20:56.845 "product_name": "passthru", 00:20:56.845 "block_size": 512, 00:20:56.845 "num_blocks": 65536, 00:20:56.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:56.845 "assigned_rate_limits": { 00:20:56.845 "rw_ios_per_sec": 0, 00:20:56.845 "rw_mbytes_per_sec": 0, 00:20:56.845 "r_mbytes_per_sec": 0, 00:20:56.845 "w_mbytes_per_sec": 0 00:20:56.845 }, 00:20:56.845 "claimed": true, 00:20:56.845 "claim_type": "exclusive_write", 00:20:56.845 "zoned": false, 00:20:56.845 "supported_io_types": { 00:20:56.845 "read": true, 00:20:56.845 "write": true, 00:20:56.845 "unmap": true, 00:20:56.845 "flush": true, 00:20:56.845 "reset": true, 00:20:56.845 "nvme_admin": false, 00:20:56.845 "nvme_io": false, 00:20:56.845 "nvme_io_md": false, 00:20:56.845 "write_zeroes": true, 00:20:56.845 "zcopy": true, 00:20:56.845 "get_zone_info": false, 00:20:56.845 "zone_management": false, 00:20:56.845 "zone_append": false, 00:20:56.845 "compare": false, 00:20:56.845 "compare_and_write": false, 00:20:56.845 "abort": true, 00:20:56.845 "seek_hole": false, 00:20:56.845 "seek_data": false, 00:20:56.845 "copy": true, 00:20:56.845 "nvme_iov_md": false 00:20:56.845 }, 00:20:56.845 "memory_domains": [ 00:20:56.845 { 00:20:56.845 "dma_device_id": "system", 00:20:56.845 "dma_device_type": 1 00:20:56.845 }, 00:20:56.845 { 00:20:56.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.845 "dma_device_type": 2 00:20:56.845 } 00:20:56.845 ], 00:20:56.845 "driver_specific": { 00:20:56.845 "passthru": { 00:20:56.845 "name": "pt3", 00:20:56.845 "base_bdev_name": "malloc3" 00:20:56.845 } 00:20:56.845 } 00:20:56.845 }' 00:20:56.845 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.845 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.845 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.845 10:31:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.104 10:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.104 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.104 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.363 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.363 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:57.363 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.363 "name": "pt4", 00:20:57.363 "aliases": [ 00:20:57.363 "00000000-0000-0000-0000-000000000004" 00:20:57.363 ], 00:20:57.363 "product_name": "passthru", 00:20:57.363 "block_size": 512, 00:20:57.363 "num_blocks": 65536, 00:20:57.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:57.363 "assigned_rate_limits": { 00:20:57.363 "rw_ios_per_sec": 0, 00:20:57.363 "rw_mbytes_per_sec": 0, 00:20:57.363 "r_mbytes_per_sec": 0, 00:20:57.363 "w_mbytes_per_sec": 0 00:20:57.363 }, 00:20:57.363 "claimed": true, 00:20:57.363 "claim_type": "exclusive_write", 00:20:57.363 "zoned": false, 00:20:57.363 "supported_io_types": { 00:20:57.363 "read": true, 00:20:57.363 "write": true, 00:20:57.363 "unmap": true, 00:20:57.363 "flush": true, 00:20:57.363 "reset": true, 00:20:57.363 "nvme_admin": false, 00:20:57.363 "nvme_io": false, 00:20:57.363 "nvme_io_md": false, 00:20:57.363 "write_zeroes": true, 00:20:57.363 "zcopy": true, 00:20:57.363 "get_zone_info": false, 00:20:57.363 "zone_management": false, 00:20:57.363 "zone_append": false, 00:20:57.363 "compare": false, 00:20:57.363 "compare_and_write": false, 00:20:57.363 "abort": true, 00:20:57.363 "seek_hole": false, 00:20:57.363 "seek_data": false, 00:20:57.363 "copy": true, 00:20:57.363 "nvme_iov_md": false 00:20:57.363 }, 00:20:57.363 "memory_domains": [ 00:20:57.363 { 00:20:57.363 "dma_device_id": "system", 00:20:57.363 "dma_device_type": 1 00:20:57.363 }, 00:20:57.363 { 00:20:57.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.363 "dma_device_type": 2 00:20:57.363 } 00:20:57.363 ], 00:20:57.363 "driver_specific": { 00:20:57.363 "passthru": { 00:20:57.363 "name": "pt4", 00:20:57.363 "base_bdev_name": "malloc4" 00:20:57.363 } 00:20:57.363 } 00:20:57.363 }' 00:20:57.363 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.623 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:20:57.882 [2024-07-26 10:31:10.742091] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 67ea419f-e027-4ba1-b466-3717e69f325f '!=' 67ea419f-e027-4ba1-b466-3717e69f325f ']' 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3430270 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3430270 ']' 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3430270 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.882 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3430270 00:20:58.142 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:58.142 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:58.142 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3430270' 00:20:58.142 killing process with pid 3430270 00:20:58.142 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3430270 00:20:58.142 [2024-07-26 10:31:10.821457] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:58.142 [2024-07-26 10:31:10.821521] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.142 [2024-07-26 10:31:10.821578] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.142 [2024-07-26 10:31:10.821589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x107eed0 name raid_bdev1, state offline 00:20:58.142 10:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3430270 00:20:58.142 [2024-07-26 10:31:10.853120] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.142 10:31:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:20:58.142 00:20:58.142 real 0m15.198s 00:20:58.142 user 0m27.461s 00:20:58.142 sys 0m2.764s 00:20:58.142 10:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.142 10:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.142 ************************************ 00:20:58.142 END TEST raid_superblock_test 00:20:58.142 ************************************ 00:20:58.402 10:31:11 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:20:58.402 10:31:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:58.402 10:31:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.402 10:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.402 ************************************ 00:20:58.402 START TEST raid_read_error_test 00:20:58.402 ************************************ 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:58.402 
10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ikZ4QmkVa7 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3433110 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3433110 /var/tmp/spdk-raid.sock 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3433110 ']' 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.402 10:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.402 [2024-07-26 10:31:11.173038] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:20:58.402 [2024-07-26 10:31:11.173091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433110 ] 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.402 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:58.402 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:58.403 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:58.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:58.403 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:58.662 [2024-07-26 10:31:11.305401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.662 [2024-07-26 10:31:11.350006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.662 [2024-07-26 10:31:11.406576] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.662 [2024-07-26 10:31:11.406601] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.231 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.231 10:31:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:59.231 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:59.231 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:59.490 BaseBdev1_malloc 00:20:59.490 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:59.750 true 00:20:59.750 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:00.009 [2024-07-26 10:31:12.740921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:00.009 [2024-07-26 10:31:12.740962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.009 [2024-07-26 10:31:12.740980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb597c0 00:21:00.009 [2024-07-26 10:31:12.740996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.009 [2024-07-26 10:31:12.742563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.009 [2024-07-26 10:31:12.742590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:00.009 BaseBdev1 00:21:00.009 10:31:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:00.009 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:00.268 BaseBdev2_malloc 00:21:00.268 10:31:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:00.528 true 00:21:00.528 10:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:00.528 [2024-07-26 10:31:13.411095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:00.528 [2024-07-26 10:31:13.411132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.528 [2024-07-26 10:31:13.411157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb00960 00:21:00.528 [2024-07-26 10:31:13.411169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.528 [2024-07-26 10:31:13.412476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.528 [2024-07-26 10:31:13.412503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:00.528 BaseBdev2 00:21:00.528 10:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:00.528 10:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:00.787 BaseBdev3_malloc 00:21:00.787 10:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:01.045 true 00:21:01.045 10:31:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:01.303 [2024-07-26 10:31:14.089034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:01.303 [2024-07-26 10:31:14.089073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.303 [2024-07-26 10:31:14.089091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb03720 00:21:01.303 [2024-07-26 10:31:14.089103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.303 [2024-07-26 10:31:14.090464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.303 [2024-07-26 10:31:14.090490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:01.303 BaseBdev3 00:21:01.303 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:01.303 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:01.561 BaseBdev4_malloc 00:21:01.561 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:01.820 true 00:21:01.820 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:02.079 [2024-07-26 10:31:14.746931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:02.079 [2024-07-26 10:31:14.746969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.079 [2024-07-26 10:31:14.746993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb028b0 00:21:02.079 [2024-07-26 10:31:14.747004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.079 [2024-07-26 10:31:14.748317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.079 [2024-07-26 10:31:14.748344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:02.079 BaseBdev4 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:02.079 [2024-07-26 10:31:14.959524] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.079 [2024-07-26 10:31:14.960647] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:02.079 [2024-07-26 10:31:14.960708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.079 [2024-07-26 10:31:14.960765] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:02.079 [2024-07-26 10:31:14.960951] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb05080 00:21:02.079 [2024-07-26 10:31:14.960962] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:02.079 [2024-07-26 10:31:14.961137] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0a540 00:21:02.079 [2024-07-26 10:31:14.961274] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb05080 00:21:02.079 [2024-07-26 10:31:14.961283] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb05080 00:21:02.079 [2024-07-26 10:31:14.961389] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
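The xtrace above is the complete recipe for the bdev stack under test: each leg is a malloc bdev wrapped by an error bdev (exposed as EE_<name>) and then by a passthru bdev that carries the test-facing BaseBdevN name, and the four legs are striped into raid_bdev1. A minimal by-hand equivalent against the same RPC socket, shown only as a sketch (it assumes the commands are issued from the SPDK checkout so that ./scripts/rpc.py resolves):
# one leg: 32 MB malloc bdev with 512-byte blocks -> error wrapper (EE_*) -> passthru with the final name
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# repeat for BaseBdev2..BaseBdev4, then assemble the striped array (64 KiB strip, superblock enabled by -s)
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s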
00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.079 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.337 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.337 10:31:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.337 10:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.337 "name": "raid_bdev1", 00:21:02.337 "uuid": "c4a11f35-0aff-4669-a8d5-1e790a4d017e", 00:21:02.337 "strip_size_kb": 64, 00:21:02.337 "state": "online", 00:21:02.338 "raid_level": "raid0", 00:21:02.338 "superblock": true, 00:21:02.338 "num_base_bdevs": 4, 00:21:02.338 "num_base_bdevs_discovered": 4, 00:21:02.338 "num_base_bdevs_operational": 4, 00:21:02.338 "base_bdevs_list": [ 00:21:02.338 { 00:21:02.338 "name": "BaseBdev1", 00:21:02.338 "uuid": "65d20de0-2b29-57f3-a7f9-85df81451527", 00:21:02.338 "is_configured": true, 00:21:02.338 "data_offset": 2048, 00:21:02.338 "data_size": 63488 00:21:02.338 }, 00:21:02.338 { 00:21:02.338 "name": "BaseBdev2", 00:21:02.338 "uuid": "8e915aca-56d8-5105-8a75-969851b957c7", 00:21:02.338 "is_configured": true, 00:21:02.338 "data_offset": 2048, 00:21:02.338 "data_size": 63488 00:21:02.338 }, 00:21:02.338 { 00:21:02.338 "name": "BaseBdev3", 00:21:02.338 "uuid": "c4e474ee-4a4f-5df2-aa00-a4b319e173b9", 00:21:02.338 "is_configured": true, 00:21:02.338 "data_offset": 2048, 00:21:02.338 "data_size": 63488 00:21:02.338 }, 00:21:02.338 { 00:21:02.338 "name": "BaseBdev4", 00:21:02.338 "uuid": "902964e8-8d48-5783-8af7-82b31f2eeb33", 00:21:02.338 "is_configured": true, 00:21:02.338 "data_offset": 2048, 00:21:02.338 "data_size": 63488 00:21:02.338 } 00:21:02.338 ] 00:21:02.338 }' 00:21:02.338 10:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.338 10:31:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 10:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:02.906 10:31:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:03.165 [2024-07-26 10:31:15.878195] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xaffd30 00:21:04.102 10:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.362 "name": "raid_bdev1", 00:21:04.362 "uuid": "c4a11f35-0aff-4669-a8d5-1e790a4d017e", 00:21:04.362 "strip_size_kb": 64, 00:21:04.362 "state": "online", 00:21:04.362 "raid_level": "raid0", 00:21:04.362 "superblock": true, 00:21:04.362 "num_base_bdevs": 4, 00:21:04.362 "num_base_bdevs_discovered": 4, 00:21:04.362 "num_base_bdevs_operational": 4, 00:21:04.362 "base_bdevs_list": [ 00:21:04.362 { 00:21:04.362 "name": "BaseBdev1", 00:21:04.362 "uuid": "65d20de0-2b29-57f3-a7f9-85df81451527", 00:21:04.362 "is_configured": true, 00:21:04.362 "data_offset": 2048, 00:21:04.362 "data_size": 63488 00:21:04.362 }, 00:21:04.362 { 00:21:04.362 "name": "BaseBdev2", 00:21:04.362 "uuid": "8e915aca-56d8-5105-8a75-969851b957c7", 00:21:04.362 "is_configured": true, 00:21:04.362 "data_offset": 2048, 00:21:04.362 "data_size": 63488 00:21:04.362 }, 00:21:04.362 { 00:21:04.362 "name": "BaseBdev3", 00:21:04.362 "uuid": "c4e474ee-4a4f-5df2-aa00-a4b319e173b9", 00:21:04.362 "is_configured": true, 00:21:04.362 "data_offset": 2048, 00:21:04.362 "data_size": 63488 00:21:04.362 }, 00:21:04.362 { 00:21:04.362 "name": "BaseBdev4", 00:21:04.362 "uuid": "902964e8-8d48-5783-8af7-82b31f2eeb33", 00:21:04.362 "is_configured": true, 00:21:04.362 "data_offset": 2048, 00:21:04.362 "data_size": 63488 00:21:04.362 } 00:21:04.362 ] 00:21:04.362 }' 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.362 10:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.931 10:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:05.190 [2024-07-26 10:31:18.009378] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.190 [2024-07-26 10:31:18.009408] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.190 [2024-07-26 10:31:18.012318] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.190 [2024-07-26 10:31:18.012355] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.190 [2024-07-26 10:31:18.012393] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.190 [2024-07-26 10:31:18.012403] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0xb05080 name raid_bdev1, state offline 00:21:05.190 0 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3433110 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3433110 ']' 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3433110 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3433110 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3433110' 00:21:05.190 killing process with pid 3433110 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3433110 00:21:05.190 [2024-07-26 10:31:18.085053] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:05.190 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3433110 00:21:05.449 [2024-07-26 10:31:18.111348] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ikZ4QmkVa7 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:21:05.449 00:21:05.449 real 0m7.204s 00:21:05.449 user 0m11.505s 00:21:05.449 sys 0m1.245s 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:05.449 10:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.449 ************************************ 00:21:05.449 END TEST raid_read_error_test 00:21:05.449 ************************************ 00:21:05.449 10:31:18 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:21:05.449 10:31:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:05.449 10:31:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.449 10:31:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.710 ************************************ 00:21:05.710 START TEST raid_write_error_test 00:21:05.710 ************************************ 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local 
raid_level=raid0 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.MQ0sCbbfSW 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3434524 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3434524 /var/tmp/spdk-raid.sock 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3434524 ']' 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 
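Worth noting before the write pass starts: bdevperf is launched idle with -z (wait for an RPC trigger) against the shared raid socket, its output ends up in the mktemp log created above (/raidtest/tmp.MQ0sCbbfSW), and only the perform_tests helper starts the actual I/O; the error is then armed while that run is in flight. A hedged sketch of the sequence the trace below follows (paths are the ones used by this job; the backgrounding and the sleep are illustrative):
# start the workload generator idle on the raid socket (flags mirror the traced invocation)
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
# ...build the error/passthru/raid stack over the same socket, then kick the run and inject the fault
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
sleep 1
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure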
00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.710 10:31:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:05.710 [2024-07-26 10:31:18.454914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:21:05.710 [2024-07-26 10:31:18.454969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434524 ] 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device cannot be used (this qat_pci_device_allocate/EAL message pair repeats once per requested QAT VF in the 0000:3d range, 0000:3d:01.0-0000:3d:02.7) 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.0 
cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:05.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:05.710 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:05.710 [2024-07-26 10:31:18.588580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.969 [2024-07-26 10:31:18.632867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.970 [2024-07-26 10:31:18.694301] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.970 [2024-07-26 10:31:18.694339] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.538 10:31:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.538 10:31:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:06.538 10:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:06.538 10:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:06.797 BaseBdev1_malloc 00:21:06.797 10:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:06.797 true 00:21:06.797 10:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:07.056 [2024-07-26 10:31:19.798338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:07.056 [2024-07-26 10:31:19.798379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.056 [2024-07-26 10:31:19.798397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26697c0 00:21:07.056 [2024-07-26 10:31:19.798409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.056 [2024-07-26 10:31:19.799953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.057 [2024-07-26 10:31:19.799980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:07.057 BaseBdev1 00:21:07.057 10:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:07.057 10:31:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:07.316 BaseBdev2_malloc 00:21:07.316 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:07.575 true 00:21:07.575 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:07.575 [2024-07-26 10:31:20.400235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:07.575 [2024-07-26 10:31:20.400292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.575 [2024-07-26 10:31:20.400311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2610960 00:21:07.575 [2024-07-26 10:31:20.400323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.575 [2024-07-26 10:31:20.401673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.575 [2024-07-26 10:31:20.401698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:07.575 BaseBdev2 00:21:07.575 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:07.575 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:07.835 BaseBdev3_malloc 00:21:07.835 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:08.094 true 00:21:08.094 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:08.094 [2024-07-26 10:31:20.953879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:08.094 [2024-07-26 10:31:20.953916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.094 [2024-07-26 10:31:20.953933] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2613720 00:21:08.094 [2024-07-26 10:31:20.953945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.094 [2024-07-26 10:31:20.955209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.094 [2024-07-26 10:31:20.955235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:08.094 BaseBdev3 00:21:08.094 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:08.094 10:31:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:08.353 BaseBdev4_malloc 00:21:08.354 10:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:21:08.922 true 00:21:08.922 10:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:09.180 [2024-07-26 10:31:21.900674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:09.180 [2024-07-26 10:31:21.900713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.180 [2024-07-26 10:31:21.900732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26128b0 00:21:09.180 [2024-07-26 10:31:21.900744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.180 [2024-07-26 10:31:21.902088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.180 [2024-07-26 10:31:21.902114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:09.180 BaseBdev4 00:21:09.180 10:31:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:21:09.748 [2024-07-26 10:31:22.393986] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.748 [2024-07-26 10:31:22.395169] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.748 [2024-07-26 10:31:22.395231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.748 [2024-07-26 10:31:22.395288] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.748 [2024-07-26 10:31:22.395474] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2615080 00:21:09.748 [2024-07-26 10:31:22.395484] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:09.748 [2024-07-26 10:31:22.395664] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x261a540 00:21:09.748 [2024-07-26 10:31:22.395795] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2615080 00:21:09.748 [2024-07-26 10:31:22.395804] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2615080 00:21:09.748 [2024-07-26 10:31:22.395912] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.748 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.007 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.007 "name": "raid_bdev1", 00:21:10.007 "uuid": "e5434776-409f-471f-ba49-7ba26c766f41", 00:21:10.007 "strip_size_kb": 64, 00:21:10.007 "state": "online", 00:21:10.007 "raid_level": "raid0", 00:21:10.007 "superblock": true, 00:21:10.007 "num_base_bdevs": 4, 00:21:10.007 "num_base_bdevs_discovered": 4, 00:21:10.007 "num_base_bdevs_operational": 4, 00:21:10.007 "base_bdevs_list": [ 00:21:10.007 { 00:21:10.007 "name": "BaseBdev1", 00:21:10.007 "uuid": "25999fac-7c0b-5eef-bd81-f15f9ae7ac43", 00:21:10.007 "is_configured": true, 00:21:10.007 "data_offset": 2048, 00:21:10.007 "data_size": 63488 00:21:10.007 }, 00:21:10.007 { 00:21:10.007 "name": "BaseBdev2", 00:21:10.007 "uuid": "e84ffe51-f440-5aaf-9dda-ff7a26401118", 00:21:10.007 "is_configured": true, 00:21:10.007 "data_offset": 2048, 00:21:10.007 "data_size": 63488 00:21:10.007 }, 00:21:10.007 { 00:21:10.007 "name": "BaseBdev3", 00:21:10.007 "uuid": "613c46c8-b031-516f-9899-7679837eb3e9", 00:21:10.007 "is_configured": true, 00:21:10.007 "data_offset": 2048, 00:21:10.007 "data_size": 63488 00:21:10.007 }, 00:21:10.007 { 00:21:10.007 "name": "BaseBdev4", 00:21:10.007 "uuid": "4aa61183-d70c-5075-8bb2-d32afc555fd1", 00:21:10.007 "is_configured": true, 00:21:10.007 "data_offset": 2048, 00:21:10.007 "data_size": 63488 00:21:10.007 } 00:21:10.007 ] 00:21:10.007 }' 00:21:10.007 10:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.007 10:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.575 10:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:10.575 10:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:10.575 [2024-07-26 10:31:23.300594] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x260fd30 00:21:11.511 10:31:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.769 "name": "raid_bdev1", 00:21:11.769 "uuid": "e5434776-409f-471f-ba49-7ba26c766f41", 00:21:11.769 "strip_size_kb": 64, 00:21:11.769 "state": "online", 00:21:11.769 "raid_level": "raid0", 00:21:11.769 "superblock": true, 00:21:11.769 "num_base_bdevs": 4, 00:21:11.769 "num_base_bdevs_discovered": 4, 00:21:11.769 "num_base_bdevs_operational": 4, 00:21:11.769 "base_bdevs_list": [ 00:21:11.769 { 00:21:11.769 "name": "BaseBdev1", 00:21:11.769 "uuid": "25999fac-7c0b-5eef-bd81-f15f9ae7ac43", 00:21:11.769 "is_configured": true, 00:21:11.769 "data_offset": 2048, 00:21:11.769 "data_size": 63488 00:21:11.769 }, 00:21:11.769 { 00:21:11.769 "name": "BaseBdev2", 00:21:11.769 "uuid": "e84ffe51-f440-5aaf-9dda-ff7a26401118", 00:21:11.769 "is_configured": true, 00:21:11.769 "data_offset": 2048, 00:21:11.769 "data_size": 63488 00:21:11.769 }, 00:21:11.769 { 00:21:11.769 "name": "BaseBdev3", 00:21:11.769 "uuid": "613c46c8-b031-516f-9899-7679837eb3e9", 00:21:11.769 "is_configured": true, 00:21:11.769 "data_offset": 2048, 00:21:11.769 "data_size": 63488 00:21:11.769 }, 00:21:11.769 { 00:21:11.769 "name": "BaseBdev4", 00:21:11.769 "uuid": "4aa61183-d70c-5075-8bb2-d32afc555fd1", 00:21:11.769 "is_configured": true, 00:21:11.769 "data_offset": 2048, 00:21:11.769 "data_size": 63488 00:21:11.769 } 00:21:11.769 ] 00:21:11.769 }' 00:21:11.769 10:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.769 10:31:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.337 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:12.596 [2024-07-26 10:31:25.406838] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.596 [2024-07-26 10:31:25.406872] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:12.596 [2024-07-26 10:31:25.409784] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.596 [2024-07-26 10:31:25.409822] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.596 [2024-07-26 10:31:25.409859] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:12.596 [2024-07-26 10:31:25.409870] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2615080 name raid_bdev1, state offline 00:21:12.596 0 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3434524 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3434524 ']' 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3434524 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3434524 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3434524' 00:21:12.596 killing process with pid 3434524 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3434524 00:21:12.596 [2024-07-26 10:31:25.478212] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:12.596 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3434524 00:21:12.856 [2024-07-26 10:31:25.504602] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.MQ0sCbbfSW 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:21:12.856 00:21:12.856 real 0m7.315s 00:21:12.856 user 0m11.696s 00:21:12.856 sys 0m1.265s 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.856 10:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.856 ************************************ 00:21:12.856 END TEST raid_write_error_test 00:21:12.856 ************************************ 00:21:12.856 10:31:25 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:21:12.856 10:31:25 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:12.856 10:31:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:12.856 10:31:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.856 10:31:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.115 ************************************ 00:21:13.115 START TEST raid_state_function_test 00:21:13.115 ************************************ 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 
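Both error passes above close the loop the same way: the bdevperf output captured in the mktemp log is reduced to a failures-per-second figure for raid_bdev1, and since raid0 carries no redundancy an injected error is expected to surface as failed I/O, so a non-zero rate is the passing result (0.47 for the read pass, 0.48 for the write pass). Condensed into one line, the extraction traced above amounts to the following (log path from the write pass):
fail_per_s=$(grep -v Job /raidtest/tmp.MQ0sCbbfSW | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != "0.00" ]]   # no redundancy, so the injected fault must show up as failed I/O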
00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:13.115 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3435789 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3435789' 00:21:13.116 Process raid pid: 3435789 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3435789 /var/tmp/spdk-raid.sock 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3435789 ']' 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:13.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.116 10:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.116 [2024-07-26 10:31:25.848037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
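Unlike the two bdevperf passes above, this state test drives a bare bdev_svc app (started below) and checks raid state transitions directly: the concat array is declared before any of its base bdevs exist, so it should report the configuring state. A quick manual check along the same lines, sketched with the names from this run (the jq filter mirrors the test's verify helper, and ./scripts/rpc.py again assumes the SPDK checkout as working directory):
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect "configuring"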
00:21:13.116 [2024-07-26 10:31:25.848085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device cannot be used (this qat_pci_device_allocate/EAL message pair repeats once per requested QAT VF, covering 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:01.6) 00:21:13.116 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:13.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.116 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:13.116 [2024-07-26 10:31:25.970049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.116 [2024-07-26 10:31:26.013898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.375 [2024-07-26 10:31:26.069230] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.375 [2024-07-26 10:31:26.069258] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.944 10:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.944 10:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:21:13.944 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:14.203 [2024-07-26 10:31:26.848613] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:14.203 [2024-07-26 10:31:26.848652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:14.203 [2024-07-26 10:31:26.848662] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:14.203 [2024-07-26 10:31:26.848673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:14.203 [2024-07-26 10:31:26.848681] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:14.203 [2024-07-26 10:31:26.848691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:14.203 [2024-07-26 10:31:26.848699] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:14.203 [2024-07-26 10:31:26.848709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:14.203 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:14.203 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:14.203 10:31:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:14.203 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:14.203 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.204 10:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.204 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.204 "name": "Existed_Raid", 00:21:14.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.204 "strip_size_kb": 64, 00:21:14.204 "state": "configuring", 00:21:14.204 "raid_level": "concat", 00:21:14.204 "superblock": false, 00:21:14.204 "num_base_bdevs": 4, 00:21:14.204 "num_base_bdevs_discovered": 0, 00:21:14.204 "num_base_bdevs_operational": 4, 00:21:14.204 "base_bdevs_list": [ 00:21:14.204 { 00:21:14.204 "name": "BaseBdev1", 00:21:14.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.204 "is_configured": false, 00:21:14.204 "data_offset": 0, 00:21:14.204 "data_size": 0 00:21:14.204 }, 00:21:14.204 { 00:21:14.204 "name": "BaseBdev2", 00:21:14.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.204 "is_configured": false, 00:21:14.204 "data_offset": 0, 00:21:14.204 "data_size": 0 00:21:14.204 }, 00:21:14.204 { 00:21:14.204 "name": "BaseBdev3", 00:21:14.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.204 "is_configured": false, 00:21:14.204 "data_offset": 0, 00:21:14.204 "data_size": 0 00:21:14.204 }, 00:21:14.204 { 00:21:14.204 "name": "BaseBdev4", 00:21:14.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.204 "is_configured": false, 00:21:14.204 "data_offset": 0, 00:21:14.204 "data_size": 0 00:21:14.204 } 00:21:14.204 ] 00:21:14.204 }' 00:21:14.204 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.204 10:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.772 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:15.031 [2024-07-26 10:31:27.807000] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:15.031 [2024-07-26 10:31:27.807030] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x980b70 name Existed_Raid, state configuring 00:21:15.031 10:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:15.599 
[2024-07-26 10:31:28.304333] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.599 [2024-07-26 10:31:28.304366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.599 [2024-07-26 10:31:28.304375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.599 [2024-07-26 10:31:28.304386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.599 [2024-07-26 10:31:28.304394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:15.599 [2024-07-26 10:31:28.304403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:15.599 [2024-07-26 10:31:28.304411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:15.599 [2024-07-26 10:31:28.304421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:15.599 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:15.859 [2024-07-26 10:31:28.554310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.859 BaseBdev1 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:15.859 10:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:16.427 10:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:16.686 [ 00:21:16.687 { 00:21:16.687 "name": "BaseBdev1", 00:21:16.687 "aliases": [ 00:21:16.687 "81c469cd-52b8-4539-81b7-fb773660e634" 00:21:16.687 ], 00:21:16.687 "product_name": "Malloc disk", 00:21:16.687 "block_size": 512, 00:21:16.687 "num_blocks": 65536, 00:21:16.687 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:16.687 "assigned_rate_limits": { 00:21:16.687 "rw_ios_per_sec": 0, 00:21:16.687 "rw_mbytes_per_sec": 0, 00:21:16.687 "r_mbytes_per_sec": 0, 00:21:16.687 "w_mbytes_per_sec": 0 00:21:16.687 }, 00:21:16.687 "claimed": true, 00:21:16.687 "claim_type": "exclusive_write", 00:21:16.687 "zoned": false, 00:21:16.687 "supported_io_types": { 00:21:16.687 "read": true, 00:21:16.687 "write": true, 00:21:16.687 "unmap": true, 00:21:16.687 "flush": true, 00:21:16.687 "reset": true, 00:21:16.687 "nvme_admin": false, 00:21:16.687 "nvme_io": false, 00:21:16.687 "nvme_io_md": false, 00:21:16.687 "write_zeroes": true, 00:21:16.687 "zcopy": true, 00:21:16.687 "get_zone_info": false, 00:21:16.687 "zone_management": false, 00:21:16.687 
"zone_append": false, 00:21:16.687 "compare": false, 00:21:16.687 "compare_and_write": false, 00:21:16.687 "abort": true, 00:21:16.687 "seek_hole": false, 00:21:16.687 "seek_data": false, 00:21:16.687 "copy": true, 00:21:16.687 "nvme_iov_md": false 00:21:16.687 }, 00:21:16.687 "memory_domains": [ 00:21:16.687 { 00:21:16.687 "dma_device_id": "system", 00:21:16.687 "dma_device_type": 1 00:21:16.687 }, 00:21:16.687 { 00:21:16.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.687 "dma_device_type": 2 00:21:16.687 } 00:21:16.687 ], 00:21:16.687 "driver_specific": {} 00:21:16.687 } 00:21:16.687 ] 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.966 "name": "Existed_Raid", 00:21:16.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.966 "strip_size_kb": 64, 00:21:16.966 "state": "configuring", 00:21:16.966 "raid_level": "concat", 00:21:16.966 "superblock": false, 00:21:16.966 "num_base_bdevs": 4, 00:21:16.966 "num_base_bdevs_discovered": 1, 00:21:16.966 "num_base_bdevs_operational": 4, 00:21:16.966 "base_bdevs_list": [ 00:21:16.966 { 00:21:16.966 "name": "BaseBdev1", 00:21:16.966 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:16.966 "is_configured": true, 00:21:16.966 "data_offset": 0, 00:21:16.966 "data_size": 65536 00:21:16.966 }, 00:21:16.966 { 00:21:16.966 "name": "BaseBdev2", 00:21:16.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.966 "is_configured": false, 00:21:16.966 "data_offset": 0, 00:21:16.966 "data_size": 0 00:21:16.966 }, 00:21:16.966 { 00:21:16.966 "name": "BaseBdev3", 00:21:16.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.966 "is_configured": false, 00:21:16.966 "data_offset": 0, 00:21:16.966 "data_size": 0 00:21:16.966 }, 00:21:16.966 { 00:21:16.966 "name": "BaseBdev4", 00:21:16.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.966 "is_configured": false, 00:21:16.966 "data_offset": 0, 
00:21:16.966 "data_size": 0 00:21:16.966 } 00:21:16.966 ] 00:21:16.966 }' 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.966 10:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.548 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:17.807 [2024-07-26 10:31:30.619752] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.807 [2024-07-26 10:31:30.619792] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9804a0 name Existed_Raid, state configuring 00:21:17.807 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:18.066 [2024-07-26 10:31:30.848684] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:18.066 [2024-07-26 10:31:30.850072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:18.066 [2024-07-26 10:31:30.850106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:18.066 [2024-07-26 10:31:30.850115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:18.066 [2024-07-26 10:31:30.850126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:18.066 [2024-07-26 10:31:30.850134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:18.066 [2024-07-26 10:31:30.850155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:18.066 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:18.066 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:18.066 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:18.066 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.067 10:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.067 10:31:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.326 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.326 "name": "Existed_Raid", 00:21:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.326 "strip_size_kb": 64, 00:21:18.326 "state": "configuring", 00:21:18.326 "raid_level": "concat", 00:21:18.326 "superblock": false, 00:21:18.326 "num_base_bdevs": 4, 00:21:18.326 "num_base_bdevs_discovered": 1, 00:21:18.326 "num_base_bdevs_operational": 4, 00:21:18.326 "base_bdevs_list": [ 00:21:18.326 { 00:21:18.326 "name": "BaseBdev1", 00:21:18.326 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:18.326 "is_configured": true, 00:21:18.326 "data_offset": 0, 00:21:18.326 "data_size": 65536 00:21:18.326 }, 00:21:18.326 { 00:21:18.326 "name": "BaseBdev2", 00:21:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.326 "is_configured": false, 00:21:18.326 "data_offset": 0, 00:21:18.326 "data_size": 0 00:21:18.326 }, 00:21:18.326 { 00:21:18.326 "name": "BaseBdev3", 00:21:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.326 "is_configured": false, 00:21:18.326 "data_offset": 0, 00:21:18.326 "data_size": 0 00:21:18.326 }, 00:21:18.326 { 00:21:18.326 "name": "BaseBdev4", 00:21:18.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.326 "is_configured": false, 00:21:18.326 "data_offset": 0, 00:21:18.326 "data_size": 0 00:21:18.326 } 00:21:18.326 ] 00:21:18.326 }' 00:21:18.326 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.326 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.895 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:19.154 [2024-07-26 10:31:31.882522] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:19.154 BaseBdev2 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:19.154 10:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:19.413 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:19.673 [ 00:21:19.673 { 00:21:19.673 "name": "BaseBdev2", 00:21:19.673 "aliases": [ 00:21:19.673 "5791f645-77f3-489f-b101-9093f299ab88" 00:21:19.673 ], 00:21:19.673 "product_name": "Malloc disk", 00:21:19.673 "block_size": 512, 00:21:19.673 "num_blocks": 65536, 00:21:19.673 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:19.673 
"assigned_rate_limits": { 00:21:19.673 "rw_ios_per_sec": 0, 00:21:19.673 "rw_mbytes_per_sec": 0, 00:21:19.673 "r_mbytes_per_sec": 0, 00:21:19.673 "w_mbytes_per_sec": 0 00:21:19.673 }, 00:21:19.673 "claimed": true, 00:21:19.673 "claim_type": "exclusive_write", 00:21:19.673 "zoned": false, 00:21:19.673 "supported_io_types": { 00:21:19.673 "read": true, 00:21:19.673 "write": true, 00:21:19.673 "unmap": true, 00:21:19.673 "flush": true, 00:21:19.673 "reset": true, 00:21:19.673 "nvme_admin": false, 00:21:19.673 "nvme_io": false, 00:21:19.673 "nvme_io_md": false, 00:21:19.673 "write_zeroes": true, 00:21:19.673 "zcopy": true, 00:21:19.673 "get_zone_info": false, 00:21:19.673 "zone_management": false, 00:21:19.673 "zone_append": false, 00:21:19.673 "compare": false, 00:21:19.673 "compare_and_write": false, 00:21:19.673 "abort": true, 00:21:19.673 "seek_hole": false, 00:21:19.673 "seek_data": false, 00:21:19.673 "copy": true, 00:21:19.673 "nvme_iov_md": false 00:21:19.673 }, 00:21:19.673 "memory_domains": [ 00:21:19.673 { 00:21:19.673 "dma_device_id": "system", 00:21:19.673 "dma_device_type": 1 00:21:19.673 }, 00:21:19.673 { 00:21:19.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.673 "dma_device_type": 2 00:21:19.674 } 00:21:19.674 ], 00:21:19.674 "driver_specific": {} 00:21:19.674 } 00:21:19.674 ] 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.674 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.933 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.933 "name": "Existed_Raid", 00:21:19.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.933 "strip_size_kb": 64, 00:21:19.933 "state": "configuring", 00:21:19.933 "raid_level": "concat", 00:21:19.933 "superblock": false, 00:21:19.933 "num_base_bdevs": 4, 00:21:19.933 "num_base_bdevs_discovered": 2, 
00:21:19.933 "num_base_bdevs_operational": 4, 00:21:19.933 "base_bdevs_list": [ 00:21:19.933 { 00:21:19.933 "name": "BaseBdev1", 00:21:19.933 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:19.933 "is_configured": true, 00:21:19.933 "data_offset": 0, 00:21:19.933 "data_size": 65536 00:21:19.933 }, 00:21:19.933 { 00:21:19.933 "name": "BaseBdev2", 00:21:19.933 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:19.933 "is_configured": true, 00:21:19.933 "data_offset": 0, 00:21:19.933 "data_size": 65536 00:21:19.933 }, 00:21:19.933 { 00:21:19.933 "name": "BaseBdev3", 00:21:19.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.933 "is_configured": false, 00:21:19.933 "data_offset": 0, 00:21:19.933 "data_size": 0 00:21:19.933 }, 00:21:19.933 { 00:21:19.933 "name": "BaseBdev4", 00:21:19.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.933 "is_configured": false, 00:21:19.933 "data_offset": 0, 00:21:19.933 "data_size": 0 00:21:19.933 } 00:21:19.933 ] 00:21:19.933 }' 00:21:19.933 10:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.933 10:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:20.502 [2024-07-26 10:31:33.341490] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:20.502 BaseBdev3 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:20.502 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.760 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:21.018 [ 00:21:21.018 { 00:21:21.018 "name": "BaseBdev3", 00:21:21.018 "aliases": [ 00:21:21.018 "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071" 00:21:21.018 ], 00:21:21.018 "product_name": "Malloc disk", 00:21:21.018 "block_size": 512, 00:21:21.018 "num_blocks": 65536, 00:21:21.018 "uuid": "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071", 00:21:21.018 "assigned_rate_limits": { 00:21:21.018 "rw_ios_per_sec": 0, 00:21:21.018 "rw_mbytes_per_sec": 0, 00:21:21.018 "r_mbytes_per_sec": 0, 00:21:21.018 "w_mbytes_per_sec": 0 00:21:21.018 }, 00:21:21.018 "claimed": true, 00:21:21.018 "claim_type": "exclusive_write", 00:21:21.018 "zoned": false, 00:21:21.018 "supported_io_types": { 00:21:21.018 "read": true, 00:21:21.018 "write": true, 00:21:21.018 "unmap": true, 00:21:21.018 "flush": true, 00:21:21.018 "reset": true, 00:21:21.018 "nvme_admin": false, 00:21:21.018 "nvme_io": false, 00:21:21.018 
"nvme_io_md": false, 00:21:21.018 "write_zeroes": true, 00:21:21.018 "zcopy": true, 00:21:21.018 "get_zone_info": false, 00:21:21.018 "zone_management": false, 00:21:21.018 "zone_append": false, 00:21:21.018 "compare": false, 00:21:21.018 "compare_and_write": false, 00:21:21.018 "abort": true, 00:21:21.018 "seek_hole": false, 00:21:21.018 "seek_data": false, 00:21:21.018 "copy": true, 00:21:21.018 "nvme_iov_md": false 00:21:21.018 }, 00:21:21.018 "memory_domains": [ 00:21:21.018 { 00:21:21.018 "dma_device_id": "system", 00:21:21.018 "dma_device_type": 1 00:21:21.018 }, 00:21:21.018 { 00:21:21.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.018 "dma_device_type": 2 00:21:21.018 } 00:21:21.018 ], 00:21:21.018 "driver_specific": {} 00:21:21.018 } 00:21:21.018 ] 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.018 10:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.278 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.278 "name": "Existed_Raid", 00:21:21.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.278 "strip_size_kb": 64, 00:21:21.278 "state": "configuring", 00:21:21.278 "raid_level": "concat", 00:21:21.278 "superblock": false, 00:21:21.278 "num_base_bdevs": 4, 00:21:21.278 "num_base_bdevs_discovered": 3, 00:21:21.278 "num_base_bdevs_operational": 4, 00:21:21.278 "base_bdevs_list": [ 00:21:21.278 { 00:21:21.278 "name": "BaseBdev1", 00:21:21.278 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:21.278 "is_configured": true, 00:21:21.278 "data_offset": 0, 00:21:21.278 "data_size": 65536 00:21:21.278 }, 00:21:21.278 { 00:21:21.278 "name": "BaseBdev2", 00:21:21.278 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:21.278 "is_configured": true, 00:21:21.278 "data_offset": 0, 00:21:21.278 "data_size": 65536 00:21:21.278 }, 00:21:21.278 { 
00:21:21.278 "name": "BaseBdev3", 00:21:21.278 "uuid": "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071", 00:21:21.278 "is_configured": true, 00:21:21.278 "data_offset": 0, 00:21:21.278 "data_size": 65536 00:21:21.278 }, 00:21:21.278 { 00:21:21.278 "name": "BaseBdev4", 00:21:21.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.278 "is_configured": false, 00:21:21.278 "data_offset": 0, 00:21:21.278 "data_size": 0 00:21:21.278 } 00:21:21.278 ] 00:21:21.278 }' 00:21:21.278 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.278 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.846 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:22.106 [2024-07-26 10:31:34.820496] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:22.106 [2024-07-26 10:31:34.820529] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb333e0 00:21:22.106 [2024-07-26 10:31:34.820537] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:22.106 [2024-07-26 10:31:34.820712] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x97a6e0 00:21:22.106 [2024-07-26 10:31:34.820819] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb333e0 00:21:22.106 [2024-07-26 10:31:34.820828] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb333e0 00:21:22.106 [2024-07-26 10:31:34.820978] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.106 BaseBdev4 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:22.106 10:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:22.365 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:22.624 [ 00:21:22.624 { 00:21:22.624 "name": "BaseBdev4", 00:21:22.624 "aliases": [ 00:21:22.624 "992d6e69-285b-49ae-8404-a535e58d44b6" 00:21:22.624 ], 00:21:22.624 "product_name": "Malloc disk", 00:21:22.624 "block_size": 512, 00:21:22.624 "num_blocks": 65536, 00:21:22.624 "uuid": "992d6e69-285b-49ae-8404-a535e58d44b6", 00:21:22.624 "assigned_rate_limits": { 00:21:22.624 "rw_ios_per_sec": 0, 00:21:22.624 "rw_mbytes_per_sec": 0, 00:21:22.624 "r_mbytes_per_sec": 0, 00:21:22.624 "w_mbytes_per_sec": 0 00:21:22.624 }, 00:21:22.624 "claimed": true, 00:21:22.624 "claim_type": "exclusive_write", 00:21:22.624 "zoned": false, 00:21:22.624 "supported_io_types": { 
00:21:22.624 "read": true, 00:21:22.624 "write": true, 00:21:22.624 "unmap": true, 00:21:22.624 "flush": true, 00:21:22.624 "reset": true, 00:21:22.624 "nvme_admin": false, 00:21:22.624 "nvme_io": false, 00:21:22.624 "nvme_io_md": false, 00:21:22.624 "write_zeroes": true, 00:21:22.624 "zcopy": true, 00:21:22.624 "get_zone_info": false, 00:21:22.624 "zone_management": false, 00:21:22.624 "zone_append": false, 00:21:22.624 "compare": false, 00:21:22.624 "compare_and_write": false, 00:21:22.624 "abort": true, 00:21:22.624 "seek_hole": false, 00:21:22.624 "seek_data": false, 00:21:22.624 "copy": true, 00:21:22.624 "nvme_iov_md": false 00:21:22.624 }, 00:21:22.624 "memory_domains": [ 00:21:22.624 { 00:21:22.624 "dma_device_id": "system", 00:21:22.624 "dma_device_type": 1 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.624 "dma_device_type": 2 00:21:22.624 } 00:21:22.624 ], 00:21:22.624 "driver_specific": {} 00:21:22.624 } 00:21:22.624 ] 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.624 "name": "Existed_Raid", 00:21:22.624 "uuid": "ea12eaab-73e1-4b37-af7f-2b6759501da2", 00:21:22.624 "strip_size_kb": 64, 00:21:22.624 "state": "online", 00:21:22.624 "raid_level": "concat", 00:21:22.624 "superblock": false, 00:21:22.624 "num_base_bdevs": 4, 00:21:22.624 "num_base_bdevs_discovered": 4, 00:21:22.624 "num_base_bdevs_operational": 4, 00:21:22.624 "base_bdevs_list": [ 00:21:22.624 { 00:21:22.624 "name": "BaseBdev1", 00:21:22.624 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:22.624 "is_configured": true, 00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 65536 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "name": 
"BaseBdev2", 00:21:22.624 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:22.624 "is_configured": true, 00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 65536 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "name": "BaseBdev3", 00:21:22.624 "uuid": "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071", 00:21:22.624 "is_configured": true, 00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 65536 00:21:22.624 }, 00:21:22.624 { 00:21:22.624 "name": "BaseBdev4", 00:21:22.624 "uuid": "992d6e69-285b-49ae-8404-a535e58d44b6", 00:21:22.624 "is_configured": true, 00:21:22.624 "data_offset": 0, 00:21:22.624 "data_size": 65536 00:21:22.624 } 00:21:22.624 ] 00:21:22.624 }' 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.624 10:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:23.191 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:23.450 [2024-07-26 10:31:36.232517] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.450 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:23.450 "name": "Existed_Raid", 00:21:23.450 "aliases": [ 00:21:23.450 "ea12eaab-73e1-4b37-af7f-2b6759501da2" 00:21:23.450 ], 00:21:23.450 "product_name": "Raid Volume", 00:21:23.450 "block_size": 512, 00:21:23.450 "num_blocks": 262144, 00:21:23.450 "uuid": "ea12eaab-73e1-4b37-af7f-2b6759501da2", 00:21:23.450 "assigned_rate_limits": { 00:21:23.450 "rw_ios_per_sec": 0, 00:21:23.450 "rw_mbytes_per_sec": 0, 00:21:23.450 "r_mbytes_per_sec": 0, 00:21:23.450 "w_mbytes_per_sec": 0 00:21:23.450 }, 00:21:23.450 "claimed": false, 00:21:23.450 "zoned": false, 00:21:23.450 "supported_io_types": { 00:21:23.450 "read": true, 00:21:23.450 "write": true, 00:21:23.450 "unmap": true, 00:21:23.450 "flush": true, 00:21:23.450 "reset": true, 00:21:23.450 "nvme_admin": false, 00:21:23.450 "nvme_io": false, 00:21:23.450 "nvme_io_md": false, 00:21:23.450 "write_zeroes": true, 00:21:23.450 "zcopy": false, 00:21:23.450 "get_zone_info": false, 00:21:23.450 "zone_management": false, 00:21:23.450 "zone_append": false, 00:21:23.450 "compare": false, 00:21:23.450 "compare_and_write": false, 00:21:23.450 "abort": false, 00:21:23.450 "seek_hole": false, 00:21:23.450 "seek_data": false, 00:21:23.450 "copy": false, 00:21:23.450 "nvme_iov_md": false 00:21:23.450 }, 00:21:23.450 "memory_domains": [ 00:21:23.450 { 00:21:23.450 "dma_device_id": "system", 00:21:23.450 "dma_device_type": 1 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.450 
"dma_device_type": 2 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "system", 00:21:23.450 "dma_device_type": 1 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.450 "dma_device_type": 2 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "system", 00:21:23.450 "dma_device_type": 1 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.450 "dma_device_type": 2 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "system", 00:21:23.450 "dma_device_type": 1 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.450 "dma_device_type": 2 00:21:23.450 } 00:21:23.450 ], 00:21:23.450 "driver_specific": { 00:21:23.450 "raid": { 00:21:23.450 "uuid": "ea12eaab-73e1-4b37-af7f-2b6759501da2", 00:21:23.450 "strip_size_kb": 64, 00:21:23.450 "state": "online", 00:21:23.450 "raid_level": "concat", 00:21:23.450 "superblock": false, 00:21:23.450 "num_base_bdevs": 4, 00:21:23.450 "num_base_bdevs_discovered": 4, 00:21:23.450 "num_base_bdevs_operational": 4, 00:21:23.450 "base_bdevs_list": [ 00:21:23.450 { 00:21:23.450 "name": "BaseBdev1", 00:21:23.450 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:23.450 "is_configured": true, 00:21:23.450 "data_offset": 0, 00:21:23.450 "data_size": 65536 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "name": "BaseBdev2", 00:21:23.450 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:23.450 "is_configured": true, 00:21:23.450 "data_offset": 0, 00:21:23.450 "data_size": 65536 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "name": "BaseBdev3", 00:21:23.450 "uuid": "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071", 00:21:23.450 "is_configured": true, 00:21:23.450 "data_offset": 0, 00:21:23.450 "data_size": 65536 00:21:23.450 }, 00:21:23.450 { 00:21:23.450 "name": "BaseBdev4", 00:21:23.450 "uuid": "992d6e69-285b-49ae-8404-a535e58d44b6", 00:21:23.450 "is_configured": true, 00:21:23.450 "data_offset": 0, 00:21:23.450 "data_size": 65536 00:21:23.450 } 00:21:23.450 ] 00:21:23.450 } 00:21:23.450 } 00:21:23.450 }' 00:21:23.450 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:23.450 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:23.450 BaseBdev2 00:21:23.450 BaseBdev3 00:21:23.450 BaseBdev4' 00:21:23.450 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:23.450 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:23.450 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:23.709 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:23.709 "name": "BaseBdev1", 00:21:23.709 "aliases": [ 00:21:23.709 "81c469cd-52b8-4539-81b7-fb773660e634" 00:21:23.709 ], 00:21:23.709 "product_name": "Malloc disk", 00:21:23.709 "block_size": 512, 00:21:23.709 "num_blocks": 65536, 00:21:23.709 "uuid": "81c469cd-52b8-4539-81b7-fb773660e634", 00:21:23.709 "assigned_rate_limits": { 00:21:23.709 "rw_ios_per_sec": 0, 00:21:23.709 "rw_mbytes_per_sec": 0, 00:21:23.709 "r_mbytes_per_sec": 0, 00:21:23.709 "w_mbytes_per_sec": 0 00:21:23.709 }, 00:21:23.709 "claimed": true, 00:21:23.709 "claim_type": "exclusive_write", 
00:21:23.709 "zoned": false, 00:21:23.709 "supported_io_types": { 00:21:23.709 "read": true, 00:21:23.709 "write": true, 00:21:23.709 "unmap": true, 00:21:23.709 "flush": true, 00:21:23.709 "reset": true, 00:21:23.709 "nvme_admin": false, 00:21:23.709 "nvme_io": false, 00:21:23.709 "nvme_io_md": false, 00:21:23.709 "write_zeroes": true, 00:21:23.709 "zcopy": true, 00:21:23.709 "get_zone_info": false, 00:21:23.709 "zone_management": false, 00:21:23.709 "zone_append": false, 00:21:23.709 "compare": false, 00:21:23.709 "compare_and_write": false, 00:21:23.709 "abort": true, 00:21:23.709 "seek_hole": false, 00:21:23.709 "seek_data": false, 00:21:23.709 "copy": true, 00:21:23.709 "nvme_iov_md": false 00:21:23.709 }, 00:21:23.709 "memory_domains": [ 00:21:23.709 { 00:21:23.709 "dma_device_id": "system", 00:21:23.709 "dma_device_type": 1 00:21:23.709 }, 00:21:23.709 { 00:21:23.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.709 "dma_device_type": 2 00:21:23.709 } 00:21:23.709 ], 00:21:23.709 "driver_specific": {} 00:21:23.709 }' 00:21:23.709 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:23.709 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.968 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.227 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:24.227 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:24.227 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:24.227 10:31:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:24.227 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:24.227 "name": "BaseBdev2", 00:21:24.227 "aliases": [ 00:21:24.227 "5791f645-77f3-489f-b101-9093f299ab88" 00:21:24.227 ], 00:21:24.227 "product_name": "Malloc disk", 00:21:24.227 "block_size": 512, 00:21:24.227 "num_blocks": 65536, 00:21:24.227 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:24.227 "assigned_rate_limits": { 00:21:24.227 "rw_ios_per_sec": 0, 00:21:24.227 "rw_mbytes_per_sec": 0, 00:21:24.227 "r_mbytes_per_sec": 0, 00:21:24.227 "w_mbytes_per_sec": 0 00:21:24.227 }, 00:21:24.227 "claimed": true, 00:21:24.227 "claim_type": "exclusive_write", 00:21:24.227 "zoned": false, 00:21:24.227 "supported_io_types": { 00:21:24.227 "read": true, 00:21:24.227 "write": true, 00:21:24.227 "unmap": true, 00:21:24.227 "flush": true, 
00:21:24.227 "reset": true, 00:21:24.227 "nvme_admin": false, 00:21:24.227 "nvme_io": false, 00:21:24.227 "nvme_io_md": false, 00:21:24.227 "write_zeroes": true, 00:21:24.227 "zcopy": true, 00:21:24.227 "get_zone_info": false, 00:21:24.227 "zone_management": false, 00:21:24.227 "zone_append": false, 00:21:24.227 "compare": false, 00:21:24.227 "compare_and_write": false, 00:21:24.227 "abort": true, 00:21:24.227 "seek_hole": false, 00:21:24.227 "seek_data": false, 00:21:24.227 "copy": true, 00:21:24.227 "nvme_iov_md": false 00:21:24.227 }, 00:21:24.227 "memory_domains": [ 00:21:24.227 { 00:21:24.227 "dma_device_id": "system", 00:21:24.227 "dma_device_type": 1 00:21:24.227 }, 00:21:24.227 { 00:21:24.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.227 "dma_device_type": 2 00:21:24.227 } 00:21:24.227 ], 00:21:24.227 "driver_specific": {} 00:21:24.227 }' 00:21:24.227 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.486 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:24.745 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:24.745 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:24.745 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:24.745 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:24.745 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:24.745 "name": "BaseBdev3", 00:21:24.745 "aliases": [ 00:21:24.745 "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071" 00:21:24.745 ], 00:21:24.745 "product_name": "Malloc disk", 00:21:24.745 "block_size": 512, 00:21:24.745 "num_blocks": 65536, 00:21:24.745 "uuid": "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071", 00:21:24.745 "assigned_rate_limits": { 00:21:24.745 "rw_ios_per_sec": 0, 00:21:24.745 "rw_mbytes_per_sec": 0, 00:21:24.745 "r_mbytes_per_sec": 0, 00:21:24.745 "w_mbytes_per_sec": 0 00:21:24.745 }, 00:21:24.745 "claimed": true, 00:21:24.745 "claim_type": "exclusive_write", 00:21:24.745 "zoned": false, 00:21:24.745 "supported_io_types": { 00:21:24.745 "read": true, 00:21:24.745 "write": true, 00:21:24.745 "unmap": true, 00:21:24.745 "flush": true, 00:21:24.745 "reset": true, 00:21:24.745 "nvme_admin": false, 00:21:24.745 "nvme_io": false, 00:21:24.745 "nvme_io_md": false, 00:21:24.745 "write_zeroes": true, 00:21:24.745 
"zcopy": true, 00:21:24.745 "get_zone_info": false, 00:21:24.745 "zone_management": false, 00:21:24.745 "zone_append": false, 00:21:24.745 "compare": false, 00:21:24.745 "compare_and_write": false, 00:21:24.745 "abort": true, 00:21:24.745 "seek_hole": false, 00:21:24.745 "seek_data": false, 00:21:24.745 "copy": true, 00:21:24.745 "nvme_iov_md": false 00:21:24.745 }, 00:21:24.745 "memory_domains": [ 00:21:24.745 { 00:21:24.745 "dma_device_id": "system", 00:21:24.745 "dma_device_type": 1 00:21:24.745 }, 00:21:24.745 { 00:21:24.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.745 "dma_device_type": 2 00:21:24.745 } 00:21:24.745 ], 00:21:24.745 "driver_specific": {} 00:21:24.745 }' 00:21:24.745 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.004 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:25.262 10:31:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.521 "name": "BaseBdev4", 00:21:25.521 "aliases": [ 00:21:25.521 "992d6e69-285b-49ae-8404-a535e58d44b6" 00:21:25.521 ], 00:21:25.521 "product_name": "Malloc disk", 00:21:25.521 "block_size": 512, 00:21:25.521 "num_blocks": 65536, 00:21:25.521 "uuid": "992d6e69-285b-49ae-8404-a535e58d44b6", 00:21:25.521 "assigned_rate_limits": { 00:21:25.521 "rw_ios_per_sec": 0, 00:21:25.521 "rw_mbytes_per_sec": 0, 00:21:25.521 "r_mbytes_per_sec": 0, 00:21:25.521 "w_mbytes_per_sec": 0 00:21:25.521 }, 00:21:25.521 "claimed": true, 00:21:25.521 "claim_type": "exclusive_write", 00:21:25.521 "zoned": false, 00:21:25.521 "supported_io_types": { 00:21:25.521 "read": true, 00:21:25.521 "write": true, 00:21:25.521 "unmap": true, 00:21:25.521 "flush": true, 00:21:25.521 "reset": true, 00:21:25.521 "nvme_admin": false, 00:21:25.521 "nvme_io": false, 00:21:25.521 "nvme_io_md": false, 00:21:25.521 "write_zeroes": true, 00:21:25.521 "zcopy": true, 00:21:25.521 "get_zone_info": false, 00:21:25.521 "zone_management": false, 00:21:25.521 "zone_append": false, 00:21:25.521 "compare": false, 00:21:25.521 
"compare_and_write": false, 00:21:25.521 "abort": true, 00:21:25.521 "seek_hole": false, 00:21:25.521 "seek_data": false, 00:21:25.521 "copy": true, 00:21:25.521 "nvme_iov_md": false 00:21:25.521 }, 00:21:25.521 "memory_domains": [ 00:21:25.521 { 00:21:25.521 "dma_device_id": "system", 00:21:25.521 "dma_device_type": 1 00:21:25.521 }, 00:21:25.521 { 00:21:25.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.521 "dma_device_type": 2 00:21:25.521 } 00:21:25.521 ], 00:21:25.521 "driver_specific": {} 00:21:25.521 }' 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.521 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.780 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.780 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.780 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.780 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.780 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.780 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:26.039 [2024-07-26 10:31:38.791075] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.039 [2024-07-26 10:31:38.791101] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:26.039 [2024-07-26 10:31:38.791151] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.039 10:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.298 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.298 "name": "Existed_Raid", 00:21:26.298 "uuid": "ea12eaab-73e1-4b37-af7f-2b6759501da2", 00:21:26.298 "strip_size_kb": 64, 00:21:26.298 "state": "offline", 00:21:26.298 "raid_level": "concat", 00:21:26.298 "superblock": false, 00:21:26.298 "num_base_bdevs": 4, 00:21:26.298 "num_base_bdevs_discovered": 3, 00:21:26.298 "num_base_bdevs_operational": 3, 00:21:26.298 "base_bdevs_list": [ 00:21:26.298 { 00:21:26.298 "name": null, 00:21:26.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.298 "is_configured": false, 00:21:26.298 "data_offset": 0, 00:21:26.298 "data_size": 65536 00:21:26.298 }, 00:21:26.298 { 00:21:26.298 "name": "BaseBdev2", 00:21:26.298 "uuid": "5791f645-77f3-489f-b101-9093f299ab88", 00:21:26.298 "is_configured": true, 00:21:26.298 "data_offset": 0, 00:21:26.298 "data_size": 65536 00:21:26.298 }, 00:21:26.298 { 00:21:26.298 "name": "BaseBdev3", 00:21:26.298 "uuid": "5d7fb0b3-a37a-4e3d-90ae-a83ad35a7071", 00:21:26.298 "is_configured": true, 00:21:26.298 "data_offset": 0, 00:21:26.298 "data_size": 65536 00:21:26.298 }, 00:21:26.298 { 00:21:26.298 "name": "BaseBdev4", 00:21:26.298 "uuid": "992d6e69-285b-49ae-8404-a535e58d44b6", 00:21:26.298 "is_configured": true, 00:21:26.298 "data_offset": 0, 00:21:26.298 "data_size": 65536 00:21:26.298 } 00:21:26.298 ] 00:21:26.298 }' 00:21:26.298 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.298 10:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.869 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:26.869 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:26.869 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.869 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:27.127 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:27.127 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.127 10:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:27.387 [2024-07-26 10:31:40.047359] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.387 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:27.646 [2024-07-26 10:31:40.450229] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:27.646 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:27.646 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:27.646 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.646 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:27.906 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:27.906 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.906 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:28.165 [2024-07-26 10:31:40.913623] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:28.165 [2024-07-26 10:31:40.913666] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb333e0 name Existed_Raid, state offline 00:21:28.165 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:28.165 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:28.165 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.165 10:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:28.425 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:28.425 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:28.425 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:28.425 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:28.425 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:28.425 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.691 BaseBdev2 00:21:28.691 10:31:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:28.691 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:28.691 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:28.691 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:28.691 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:28.691 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:28.691 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.970 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.970 [ 00:21:28.970 { 00:21:28.970 "name": "BaseBdev2", 00:21:28.970 "aliases": [ 00:21:28.970 "631e0dab-d1e2-40f0-a7e4-b51b01218095" 00:21:28.970 ], 00:21:28.970 "product_name": "Malloc disk", 00:21:28.970 "block_size": 512, 00:21:28.970 "num_blocks": 65536, 00:21:28.970 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:28.970 "assigned_rate_limits": { 00:21:28.970 "rw_ios_per_sec": 0, 00:21:28.970 "rw_mbytes_per_sec": 0, 00:21:28.970 "r_mbytes_per_sec": 0, 00:21:28.970 "w_mbytes_per_sec": 0 00:21:28.970 }, 00:21:28.970 "claimed": false, 00:21:28.970 "zoned": false, 00:21:28.970 "supported_io_types": { 00:21:28.970 "read": true, 00:21:28.970 "write": true, 00:21:28.970 "unmap": true, 00:21:28.970 "flush": true, 00:21:28.970 "reset": true, 00:21:28.970 "nvme_admin": false, 00:21:28.970 "nvme_io": false, 00:21:28.970 "nvme_io_md": false, 00:21:28.970 "write_zeroes": true, 00:21:28.970 "zcopy": true, 00:21:28.970 "get_zone_info": false, 00:21:28.970 "zone_management": false, 00:21:28.970 "zone_append": false, 00:21:28.970 "compare": false, 00:21:28.970 "compare_and_write": false, 00:21:28.970 "abort": true, 00:21:28.970 "seek_hole": false, 00:21:28.970 "seek_data": false, 00:21:28.970 "copy": true, 00:21:28.970 "nvme_iov_md": false 00:21:28.970 }, 00:21:28.970 "memory_domains": [ 00:21:28.970 { 00:21:28.970 "dma_device_id": "system", 00:21:28.970 "dma_device_type": 1 00:21:28.970 }, 00:21:28.970 { 00:21:28.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.970 "dma_device_type": 2 00:21:28.970 } 00:21:28.970 ], 00:21:28.970 "driver_specific": {} 00:21:28.970 } 00:21:28.970 ] 00:21:28.970 10:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:28.970 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:28.970 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:28.970 10:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:29.230 BaseBdev3 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:29.230 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:29.490 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:29.749 [ 00:21:29.749 { 00:21:29.749 "name": "BaseBdev3", 00:21:29.749 "aliases": [ 00:21:29.749 "c56d7174-31cb-4131-be1d-fdd7133d9cf2" 00:21:29.749 ], 00:21:29.749 "product_name": "Malloc disk", 00:21:29.749 "block_size": 512, 00:21:29.749 "num_blocks": 65536, 00:21:29.749 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:29.749 "assigned_rate_limits": { 00:21:29.749 "rw_ios_per_sec": 0, 00:21:29.749 "rw_mbytes_per_sec": 0, 00:21:29.749 "r_mbytes_per_sec": 0, 00:21:29.749 "w_mbytes_per_sec": 0 00:21:29.749 }, 00:21:29.749 "claimed": false, 00:21:29.749 "zoned": false, 00:21:29.749 "supported_io_types": { 00:21:29.749 "read": true, 00:21:29.749 "write": true, 00:21:29.749 "unmap": true, 00:21:29.749 "flush": true, 00:21:29.749 "reset": true, 00:21:29.749 "nvme_admin": false, 00:21:29.749 "nvme_io": false, 00:21:29.749 "nvme_io_md": false, 00:21:29.749 "write_zeroes": true, 00:21:29.749 "zcopy": true, 00:21:29.749 "get_zone_info": false, 00:21:29.749 "zone_management": false, 00:21:29.749 "zone_append": false, 00:21:29.749 "compare": false, 00:21:29.749 "compare_and_write": false, 00:21:29.749 "abort": true, 00:21:29.749 "seek_hole": false, 00:21:29.749 "seek_data": false, 00:21:29.749 "copy": true, 00:21:29.749 "nvme_iov_md": false 00:21:29.749 }, 00:21:29.750 "memory_domains": [ 00:21:29.750 { 00:21:29.750 "dma_device_id": "system", 00:21:29.750 "dma_device_type": 1 00:21:29.750 }, 00:21:29.750 { 00:21:29.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.750 "dma_device_type": 2 00:21:29.750 } 00:21:29.750 ], 00:21:29.750 "driver_specific": {} 00:21:29.750 } 00:21:29.750 ] 00:21:29.750 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:29.750 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:29.750 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:29.750 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:30.009 BaseBdev4 00:21:30.009 10:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:30.009 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:30.009 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:30.009 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:30.009 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:30.009 10:31:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:30.009 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:30.269 10:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:30.528 [ 00:21:30.528 { 00:21:30.528 "name": "BaseBdev4", 00:21:30.528 "aliases": [ 00:21:30.528 "8ff6810d-246b-42c3-a498-e00903cb32bc" 00:21:30.528 ], 00:21:30.528 "product_name": "Malloc disk", 00:21:30.528 "block_size": 512, 00:21:30.528 "num_blocks": 65536, 00:21:30.528 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:30.528 "assigned_rate_limits": { 00:21:30.528 "rw_ios_per_sec": 0, 00:21:30.528 "rw_mbytes_per_sec": 0, 00:21:30.528 "r_mbytes_per_sec": 0, 00:21:30.528 "w_mbytes_per_sec": 0 00:21:30.528 }, 00:21:30.528 "claimed": false, 00:21:30.528 "zoned": false, 00:21:30.528 "supported_io_types": { 00:21:30.528 "read": true, 00:21:30.528 "write": true, 00:21:30.528 "unmap": true, 00:21:30.528 "flush": true, 00:21:30.528 "reset": true, 00:21:30.528 "nvme_admin": false, 00:21:30.528 "nvme_io": false, 00:21:30.528 "nvme_io_md": false, 00:21:30.528 "write_zeroes": true, 00:21:30.528 "zcopy": true, 00:21:30.528 "get_zone_info": false, 00:21:30.528 "zone_management": false, 00:21:30.528 "zone_append": false, 00:21:30.528 "compare": false, 00:21:30.528 "compare_and_write": false, 00:21:30.528 "abort": true, 00:21:30.528 "seek_hole": false, 00:21:30.528 "seek_data": false, 00:21:30.528 "copy": true, 00:21:30.528 "nvme_iov_md": false 00:21:30.528 }, 00:21:30.528 "memory_domains": [ 00:21:30.528 { 00:21:30.528 "dma_device_id": "system", 00:21:30.528 "dma_device_type": 1 00:21:30.528 }, 00:21:30.528 { 00:21:30.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.528 "dma_device_type": 2 00:21:30.528 } 00:21:30.528 ], 00:21:30.528 "driver_specific": {} 00:21:30.528 } 00:21:30.528 ] 00:21:30.528 10:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:30.528 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:30.528 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:30.528 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:30.787 [2024-07-26 10:31:43.431061] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.787 [2024-07-26 10:31:43.431103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.787 [2024-07-26 10:31:43.431122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:30.787 [2024-07-26 10:31:43.432354] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.787 [2024-07-26 10:31:43.432394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:30.787 10:31:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.787 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.046 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.046 "name": "Existed_Raid", 00:21:31.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.046 "strip_size_kb": 64, 00:21:31.046 "state": "configuring", 00:21:31.046 "raid_level": "concat", 00:21:31.046 "superblock": false, 00:21:31.046 "num_base_bdevs": 4, 00:21:31.046 "num_base_bdevs_discovered": 3, 00:21:31.046 "num_base_bdevs_operational": 4, 00:21:31.046 "base_bdevs_list": [ 00:21:31.046 { 00:21:31.046 "name": "BaseBdev1", 00:21:31.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.046 "is_configured": false, 00:21:31.046 "data_offset": 0, 00:21:31.046 "data_size": 0 00:21:31.046 }, 00:21:31.046 { 00:21:31.046 "name": "BaseBdev2", 00:21:31.046 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:31.046 "is_configured": true, 00:21:31.046 "data_offset": 0, 00:21:31.046 "data_size": 65536 00:21:31.046 }, 00:21:31.046 { 00:21:31.046 "name": "BaseBdev3", 00:21:31.046 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:31.046 "is_configured": true, 00:21:31.046 "data_offset": 0, 00:21:31.046 "data_size": 65536 00:21:31.046 }, 00:21:31.046 { 00:21:31.046 "name": "BaseBdev4", 00:21:31.047 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:31.047 "is_configured": true, 00:21:31.047 "data_offset": 0, 00:21:31.047 "data_size": 65536 00:21:31.047 } 00:21:31.047 ] 00:21:31.047 }' 00:21:31.047 10:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.047 10:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:31.615 [2024-07-26 10:31:44.481797] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.615 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.875 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.875 "name": "Existed_Raid", 00:21:31.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.875 "strip_size_kb": 64, 00:21:31.875 "state": "configuring", 00:21:31.875 "raid_level": "concat", 00:21:31.875 "superblock": false, 00:21:31.875 "num_base_bdevs": 4, 00:21:31.875 "num_base_bdevs_discovered": 2, 00:21:31.875 "num_base_bdevs_operational": 4, 00:21:31.875 "base_bdevs_list": [ 00:21:31.875 { 00:21:31.875 "name": "BaseBdev1", 00:21:31.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.875 "is_configured": false, 00:21:31.875 "data_offset": 0, 00:21:31.875 "data_size": 0 00:21:31.875 }, 00:21:31.875 { 00:21:31.875 "name": null, 00:21:31.875 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:31.875 "is_configured": false, 00:21:31.875 "data_offset": 0, 00:21:31.875 "data_size": 65536 00:21:31.875 }, 00:21:31.875 { 00:21:31.875 "name": "BaseBdev3", 00:21:31.875 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:31.875 "is_configured": true, 00:21:31.875 "data_offset": 0, 00:21:31.875 "data_size": 65536 00:21:31.875 }, 00:21:31.875 { 00:21:31.875 "name": "BaseBdev4", 00:21:31.875 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:31.875 "is_configured": true, 00:21:31.875 "data_offset": 0, 00:21:31.875 "data_size": 65536 00:21:31.875 } 00:21:31.875 ] 00:21:31.875 }' 00:21:31.875 10:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.875 10:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.444 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.444 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:32.703 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:32.703 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:32.966 [2024-07-26 
10:31:45.748235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.966 BaseBdev1 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:32.966 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:33.226 10:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.486 [ 00:21:33.486 { 00:21:33.486 "name": "BaseBdev1", 00:21:33.486 "aliases": [ 00:21:33.486 "4dfe7729-9a55-4262-a3a6-9e2878c3efa2" 00:21:33.486 ], 00:21:33.486 "product_name": "Malloc disk", 00:21:33.486 "block_size": 512, 00:21:33.486 "num_blocks": 65536, 00:21:33.486 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:33.486 "assigned_rate_limits": { 00:21:33.486 "rw_ios_per_sec": 0, 00:21:33.486 "rw_mbytes_per_sec": 0, 00:21:33.486 "r_mbytes_per_sec": 0, 00:21:33.486 "w_mbytes_per_sec": 0 00:21:33.486 }, 00:21:33.486 "claimed": true, 00:21:33.486 "claim_type": "exclusive_write", 00:21:33.486 "zoned": false, 00:21:33.486 "supported_io_types": { 00:21:33.486 "read": true, 00:21:33.486 "write": true, 00:21:33.486 "unmap": true, 00:21:33.486 "flush": true, 00:21:33.486 "reset": true, 00:21:33.486 "nvme_admin": false, 00:21:33.486 "nvme_io": false, 00:21:33.486 "nvme_io_md": false, 00:21:33.486 "write_zeroes": true, 00:21:33.486 "zcopy": true, 00:21:33.486 "get_zone_info": false, 00:21:33.486 "zone_management": false, 00:21:33.486 "zone_append": false, 00:21:33.486 "compare": false, 00:21:33.486 "compare_and_write": false, 00:21:33.486 "abort": true, 00:21:33.486 "seek_hole": false, 00:21:33.486 "seek_data": false, 00:21:33.486 "copy": true, 00:21:33.486 "nvme_iov_md": false 00:21:33.486 }, 00:21:33.486 "memory_domains": [ 00:21:33.486 { 00:21:33.486 "dma_device_id": "system", 00:21:33.486 "dma_device_type": 1 00:21:33.486 }, 00:21:33.486 { 00:21:33.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.486 "dma_device_type": 2 00:21:33.486 } 00:21:33.486 ], 00:21:33.486 "driver_specific": {} 00:21:33.486 } 00:21:33.486 ] 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.486 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.746 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.746 "name": "Existed_Raid", 00:21:33.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.746 "strip_size_kb": 64, 00:21:33.746 "state": "configuring", 00:21:33.746 "raid_level": "concat", 00:21:33.746 "superblock": false, 00:21:33.746 "num_base_bdevs": 4, 00:21:33.746 "num_base_bdevs_discovered": 3, 00:21:33.746 "num_base_bdevs_operational": 4, 00:21:33.746 "base_bdevs_list": [ 00:21:33.746 { 00:21:33.746 "name": "BaseBdev1", 00:21:33.746 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:33.746 "is_configured": true, 00:21:33.746 "data_offset": 0, 00:21:33.746 "data_size": 65536 00:21:33.746 }, 00:21:33.746 { 00:21:33.746 "name": null, 00:21:33.746 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:33.746 "is_configured": false, 00:21:33.746 "data_offset": 0, 00:21:33.746 "data_size": 65536 00:21:33.746 }, 00:21:33.746 { 00:21:33.746 "name": "BaseBdev3", 00:21:33.746 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:33.746 "is_configured": true, 00:21:33.746 "data_offset": 0, 00:21:33.746 "data_size": 65536 00:21:33.746 }, 00:21:33.746 { 00:21:33.746 "name": "BaseBdev4", 00:21:33.746 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:33.746 "is_configured": true, 00:21:33.746 "data_offset": 0, 00:21:33.746 "data_size": 65536 00:21:33.746 } 00:21:33.746 ] 00:21:33.746 }' 00:21:33.746 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.746 10:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.315 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.315 10:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:34.575 [2024-07-26 10:31:47.432733] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.575 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.835 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.835 "name": "Existed_Raid", 00:21:34.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.835 "strip_size_kb": 64, 00:21:34.835 "state": "configuring", 00:21:34.835 "raid_level": "concat", 00:21:34.835 "superblock": false, 00:21:34.835 "num_base_bdevs": 4, 00:21:34.835 "num_base_bdevs_discovered": 2, 00:21:34.835 "num_base_bdevs_operational": 4, 00:21:34.835 "base_bdevs_list": [ 00:21:34.835 { 00:21:34.835 "name": "BaseBdev1", 00:21:34.835 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:34.835 "is_configured": true, 00:21:34.835 "data_offset": 0, 00:21:34.835 "data_size": 65536 00:21:34.835 }, 00:21:34.835 { 00:21:34.835 "name": null, 00:21:34.835 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:34.835 "is_configured": false, 00:21:34.835 "data_offset": 0, 00:21:34.835 "data_size": 65536 00:21:34.835 }, 00:21:34.835 { 00:21:34.835 "name": null, 00:21:34.835 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:34.835 "is_configured": false, 00:21:34.835 "data_offset": 0, 00:21:34.835 "data_size": 65536 00:21:34.835 }, 00:21:34.835 { 00:21:34.835 "name": "BaseBdev4", 00:21:34.835 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:34.835 "is_configured": true, 00:21:34.835 "data_offset": 0, 00:21:34.835 "data_size": 65536 00:21:34.835 } 00:21:34.835 ] 00:21:34.835 }' 00:21:34.835 10:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.835 10:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.404 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.404 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:35.662 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:35.662 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:35.920 [2024-07-26 10:31:48.692075] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.920 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.178 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.178 "name": "Existed_Raid", 00:21:36.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.178 "strip_size_kb": 64, 00:21:36.178 "state": "configuring", 00:21:36.178 "raid_level": "concat", 00:21:36.178 "superblock": false, 00:21:36.178 "num_base_bdevs": 4, 00:21:36.178 "num_base_bdevs_discovered": 3, 00:21:36.178 "num_base_bdevs_operational": 4, 00:21:36.178 "base_bdevs_list": [ 00:21:36.178 { 00:21:36.178 "name": "BaseBdev1", 00:21:36.178 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:36.178 "is_configured": true, 00:21:36.178 "data_offset": 0, 00:21:36.178 "data_size": 65536 00:21:36.178 }, 00:21:36.178 { 00:21:36.178 "name": null, 00:21:36.178 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:36.178 "is_configured": false, 00:21:36.178 "data_offset": 0, 00:21:36.178 "data_size": 65536 00:21:36.178 }, 00:21:36.178 { 00:21:36.178 "name": "BaseBdev3", 00:21:36.178 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:36.178 "is_configured": true, 00:21:36.178 "data_offset": 0, 00:21:36.178 "data_size": 65536 00:21:36.178 }, 00:21:36.178 { 00:21:36.178 "name": "BaseBdev4", 00:21:36.178 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:36.178 "is_configured": true, 00:21:36.178 "data_offset": 0, 00:21:36.178 "data_size": 65536 00:21:36.178 } 00:21:36.178 ] 00:21:36.178 }' 00:21:36.178 10:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.178 10:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.744 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.744 10:31:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:37.003 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:37.003 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:37.263 [2024-07-26 10:31:49.959494] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.263 10:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.522 10:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.522 "name": "Existed_Raid", 00:21:37.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.522 "strip_size_kb": 64, 00:21:37.522 "state": "configuring", 00:21:37.522 "raid_level": "concat", 00:21:37.522 "superblock": false, 00:21:37.522 "num_base_bdevs": 4, 00:21:37.522 "num_base_bdevs_discovered": 2, 00:21:37.522 "num_base_bdevs_operational": 4, 00:21:37.522 "base_bdevs_list": [ 00:21:37.522 { 00:21:37.522 "name": null, 00:21:37.522 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:37.522 "is_configured": false, 00:21:37.522 "data_offset": 0, 00:21:37.522 "data_size": 65536 00:21:37.522 }, 00:21:37.522 { 00:21:37.522 "name": null, 00:21:37.522 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:37.522 "is_configured": false, 00:21:37.522 "data_offset": 0, 00:21:37.522 "data_size": 65536 00:21:37.522 }, 00:21:37.522 { 00:21:37.522 "name": "BaseBdev3", 00:21:37.522 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:37.522 "is_configured": true, 00:21:37.522 "data_offset": 0, 00:21:37.522 "data_size": 65536 00:21:37.522 }, 00:21:37.522 { 00:21:37.522 "name": "BaseBdev4", 00:21:37.522 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:37.522 "is_configured": true, 00:21:37.522 "data_offset": 0, 00:21:37.522 "data_size": 65536 00:21:37.522 } 00:21:37.522 ] 00:21:37.522 }' 00:21:37.522 10:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.522 10:31:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.090 10:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.090 10:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:38.350 [2024-07-26 10:31:51.228989] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.350 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.609 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.609 "name": "Existed_Raid", 00:21:38.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.609 "strip_size_kb": 64, 00:21:38.609 "state": "configuring", 00:21:38.609 "raid_level": "concat", 00:21:38.609 "superblock": false, 00:21:38.609 "num_base_bdevs": 4, 00:21:38.609 "num_base_bdevs_discovered": 3, 00:21:38.609 "num_base_bdevs_operational": 4, 00:21:38.609 "base_bdevs_list": [ 00:21:38.609 { 00:21:38.609 "name": null, 00:21:38.609 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:38.609 "is_configured": false, 00:21:38.609 "data_offset": 0, 00:21:38.609 "data_size": 65536 00:21:38.609 }, 00:21:38.609 { 00:21:38.609 "name": "BaseBdev2", 00:21:38.609 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:38.609 "is_configured": true, 00:21:38.609 "data_offset": 0, 00:21:38.609 "data_size": 65536 00:21:38.609 }, 00:21:38.609 { 00:21:38.609 "name": "BaseBdev3", 00:21:38.609 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:38.609 "is_configured": true, 00:21:38.609 "data_offset": 0, 00:21:38.609 "data_size": 65536 00:21:38.609 }, 00:21:38.609 { 
00:21:38.609 "name": "BaseBdev4", 00:21:38.609 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:38.609 "is_configured": true, 00:21:38.609 "data_offset": 0, 00:21:38.609 "data_size": 65536 00:21:38.609 } 00:21:38.609 ] 00:21:38.609 }' 00:21:38.609 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.609 10:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.177 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:39.177 10:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.435 10:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:39.436 10:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.436 10:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:39.694 10:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4dfe7729-9a55-4262-a3a6-9e2878c3efa2 00:21:39.954 [2024-07-26 10:31:52.663972] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:39.954 [2024-07-26 10:31:52.664005] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x97df80 00:21:39.954 [2024-07-26 10:31:52.664013] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:39.954 [2024-07-26 10:31:52.664199] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x982b20 00:21:39.954 [2024-07-26 10:31:52.664305] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x97df80 00:21:39.954 [2024-07-26 10:31:52.664319] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x97df80 00:21:39.954 [2024-07-26 10:31:52.664472] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.954 NewBaseBdev 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:39.954 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.213 10:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:40.486 [ 00:21:40.486 { 00:21:40.486 "name": "NewBaseBdev", 00:21:40.486 
"aliases": [ 00:21:40.486 "4dfe7729-9a55-4262-a3a6-9e2878c3efa2" 00:21:40.486 ], 00:21:40.486 "product_name": "Malloc disk", 00:21:40.486 "block_size": 512, 00:21:40.486 "num_blocks": 65536, 00:21:40.486 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:40.486 "assigned_rate_limits": { 00:21:40.486 "rw_ios_per_sec": 0, 00:21:40.486 "rw_mbytes_per_sec": 0, 00:21:40.486 "r_mbytes_per_sec": 0, 00:21:40.486 "w_mbytes_per_sec": 0 00:21:40.486 }, 00:21:40.486 "claimed": true, 00:21:40.486 "claim_type": "exclusive_write", 00:21:40.486 "zoned": false, 00:21:40.486 "supported_io_types": { 00:21:40.486 "read": true, 00:21:40.486 "write": true, 00:21:40.486 "unmap": true, 00:21:40.486 "flush": true, 00:21:40.486 "reset": true, 00:21:40.486 "nvme_admin": false, 00:21:40.486 "nvme_io": false, 00:21:40.486 "nvme_io_md": false, 00:21:40.486 "write_zeroes": true, 00:21:40.486 "zcopy": true, 00:21:40.486 "get_zone_info": false, 00:21:40.486 "zone_management": false, 00:21:40.486 "zone_append": false, 00:21:40.486 "compare": false, 00:21:40.486 "compare_and_write": false, 00:21:40.486 "abort": true, 00:21:40.486 "seek_hole": false, 00:21:40.486 "seek_data": false, 00:21:40.486 "copy": true, 00:21:40.486 "nvme_iov_md": false 00:21:40.486 }, 00:21:40.486 "memory_domains": [ 00:21:40.486 { 00:21:40.486 "dma_device_id": "system", 00:21:40.486 "dma_device_type": 1 00:21:40.486 }, 00:21:40.486 { 00:21:40.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.486 "dma_device_type": 2 00:21:40.486 } 00:21:40.486 ], 00:21:40.486 "driver_specific": {} 00:21:40.486 } 00:21:40.486 ] 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.486 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.486 "name": "Existed_Raid", 00:21:40.486 "uuid": "2d1a324d-6613-4bef-a3d5-b8c3e27af126", 00:21:40.486 "strip_size_kb": 64, 00:21:40.486 "state": "online", 00:21:40.486 "raid_level": "concat", 00:21:40.486 "superblock": false, 00:21:40.486 "num_base_bdevs": 4, 
00:21:40.486 "num_base_bdevs_discovered": 4, 00:21:40.486 "num_base_bdevs_operational": 4, 00:21:40.486 "base_bdevs_list": [ 00:21:40.486 { 00:21:40.486 "name": "NewBaseBdev", 00:21:40.486 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:40.486 "is_configured": true, 00:21:40.486 "data_offset": 0, 00:21:40.486 "data_size": 65536 00:21:40.486 }, 00:21:40.486 { 00:21:40.486 "name": "BaseBdev2", 00:21:40.486 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:40.486 "is_configured": true, 00:21:40.486 "data_offset": 0, 00:21:40.486 "data_size": 65536 00:21:40.486 }, 00:21:40.486 { 00:21:40.486 "name": "BaseBdev3", 00:21:40.486 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:40.486 "is_configured": true, 00:21:40.486 "data_offset": 0, 00:21:40.486 "data_size": 65536 00:21:40.486 }, 00:21:40.486 { 00:21:40.487 "name": "BaseBdev4", 00:21:40.487 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:40.487 "is_configured": true, 00:21:40.487 "data_offset": 0, 00:21:40.487 "data_size": 65536 00:21:40.487 } 00:21:40.487 ] 00:21:40.487 }' 00:21:40.487 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.487 10:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:41.074 10:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:41.332 [2024-07-26 10:31:54.128186] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.332 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:41.332 "name": "Existed_Raid", 00:21:41.332 "aliases": [ 00:21:41.332 "2d1a324d-6613-4bef-a3d5-b8c3e27af126" 00:21:41.332 ], 00:21:41.332 "product_name": "Raid Volume", 00:21:41.332 "block_size": 512, 00:21:41.332 "num_blocks": 262144, 00:21:41.332 "uuid": "2d1a324d-6613-4bef-a3d5-b8c3e27af126", 00:21:41.332 "assigned_rate_limits": { 00:21:41.332 "rw_ios_per_sec": 0, 00:21:41.332 "rw_mbytes_per_sec": 0, 00:21:41.332 "r_mbytes_per_sec": 0, 00:21:41.332 "w_mbytes_per_sec": 0 00:21:41.332 }, 00:21:41.332 "claimed": false, 00:21:41.332 "zoned": false, 00:21:41.332 "supported_io_types": { 00:21:41.332 "read": true, 00:21:41.332 "write": true, 00:21:41.332 "unmap": true, 00:21:41.332 "flush": true, 00:21:41.332 "reset": true, 00:21:41.332 "nvme_admin": false, 00:21:41.332 "nvme_io": false, 00:21:41.332 "nvme_io_md": false, 00:21:41.332 "write_zeroes": true, 00:21:41.332 "zcopy": false, 00:21:41.332 "get_zone_info": false, 00:21:41.332 "zone_management": false, 00:21:41.332 "zone_append": false, 00:21:41.332 "compare": false, 00:21:41.332 "compare_and_write": false, 
00:21:41.332 "abort": false, 00:21:41.332 "seek_hole": false, 00:21:41.332 "seek_data": false, 00:21:41.332 "copy": false, 00:21:41.332 "nvme_iov_md": false 00:21:41.332 }, 00:21:41.332 "memory_domains": [ 00:21:41.332 { 00:21:41.332 "dma_device_id": "system", 00:21:41.332 "dma_device_type": 1 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.332 "dma_device_type": 2 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "system", 00:21:41.332 "dma_device_type": 1 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.332 "dma_device_type": 2 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "system", 00:21:41.332 "dma_device_type": 1 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.332 "dma_device_type": 2 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "system", 00:21:41.332 "dma_device_type": 1 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.332 "dma_device_type": 2 00:21:41.332 } 00:21:41.332 ], 00:21:41.332 "driver_specific": { 00:21:41.332 "raid": { 00:21:41.332 "uuid": "2d1a324d-6613-4bef-a3d5-b8c3e27af126", 00:21:41.332 "strip_size_kb": 64, 00:21:41.332 "state": "online", 00:21:41.332 "raid_level": "concat", 00:21:41.332 "superblock": false, 00:21:41.332 "num_base_bdevs": 4, 00:21:41.332 "num_base_bdevs_discovered": 4, 00:21:41.332 "num_base_bdevs_operational": 4, 00:21:41.332 "base_bdevs_list": [ 00:21:41.332 { 00:21:41.332 "name": "NewBaseBdev", 00:21:41.332 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:41.332 "is_configured": true, 00:21:41.332 "data_offset": 0, 00:21:41.332 "data_size": 65536 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "name": "BaseBdev2", 00:21:41.332 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:41.332 "is_configured": true, 00:21:41.332 "data_offset": 0, 00:21:41.332 "data_size": 65536 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "name": "BaseBdev3", 00:21:41.332 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:41.332 "is_configured": true, 00:21:41.332 "data_offset": 0, 00:21:41.332 "data_size": 65536 00:21:41.332 }, 00:21:41.332 { 00:21:41.332 "name": "BaseBdev4", 00:21:41.332 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:41.332 "is_configured": true, 00:21:41.332 "data_offset": 0, 00:21:41.332 "data_size": 65536 00:21:41.332 } 00:21:41.332 ] 00:21:41.332 } 00:21:41.332 } 00:21:41.332 }' 00:21:41.332 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:41.332 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:41.332 BaseBdev2 00:21:41.332 BaseBdev3 00:21:41.332 BaseBdev4' 00:21:41.333 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.333 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:41.333 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.590 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.590 "name": "NewBaseBdev", 00:21:41.590 "aliases": [ 00:21:41.590 "4dfe7729-9a55-4262-a3a6-9e2878c3efa2" 00:21:41.590 ], 00:21:41.590 "product_name": "Malloc disk", 
00:21:41.590 "block_size": 512, 00:21:41.590 "num_blocks": 65536, 00:21:41.590 "uuid": "4dfe7729-9a55-4262-a3a6-9e2878c3efa2", 00:21:41.590 "assigned_rate_limits": { 00:21:41.590 "rw_ios_per_sec": 0, 00:21:41.590 "rw_mbytes_per_sec": 0, 00:21:41.590 "r_mbytes_per_sec": 0, 00:21:41.590 "w_mbytes_per_sec": 0 00:21:41.590 }, 00:21:41.590 "claimed": true, 00:21:41.590 "claim_type": "exclusive_write", 00:21:41.590 "zoned": false, 00:21:41.590 "supported_io_types": { 00:21:41.590 "read": true, 00:21:41.590 "write": true, 00:21:41.590 "unmap": true, 00:21:41.590 "flush": true, 00:21:41.590 "reset": true, 00:21:41.590 "nvme_admin": false, 00:21:41.590 "nvme_io": false, 00:21:41.590 "nvme_io_md": false, 00:21:41.590 "write_zeroes": true, 00:21:41.590 "zcopy": true, 00:21:41.590 "get_zone_info": false, 00:21:41.590 "zone_management": false, 00:21:41.590 "zone_append": false, 00:21:41.590 "compare": false, 00:21:41.590 "compare_and_write": false, 00:21:41.590 "abort": true, 00:21:41.591 "seek_hole": false, 00:21:41.591 "seek_data": false, 00:21:41.591 "copy": true, 00:21:41.591 "nvme_iov_md": false 00:21:41.591 }, 00:21:41.591 "memory_domains": [ 00:21:41.591 { 00:21:41.591 "dma_device_id": "system", 00:21:41.591 "dma_device_type": 1 00:21:41.591 }, 00:21:41.591 { 00:21:41.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.591 "dma_device_type": 2 00:21:41.591 } 00:21:41.591 ], 00:21:41.591 "driver_specific": {} 00:21:41.591 }' 00:21:41.591 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.591 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.849 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.108 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.108 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.108 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:42.108 10:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:42.367 "name": "BaseBdev2", 00:21:42.367 "aliases": [ 00:21:42.367 "631e0dab-d1e2-40f0-a7e4-b51b01218095" 00:21:42.367 ], 00:21:42.367 "product_name": "Malloc disk", 00:21:42.367 "block_size": 512, 00:21:42.367 "num_blocks": 65536, 00:21:42.367 "uuid": "631e0dab-d1e2-40f0-a7e4-b51b01218095", 00:21:42.367 "assigned_rate_limits": { 00:21:42.367 
"rw_ios_per_sec": 0, 00:21:42.367 "rw_mbytes_per_sec": 0, 00:21:42.367 "r_mbytes_per_sec": 0, 00:21:42.367 "w_mbytes_per_sec": 0 00:21:42.367 }, 00:21:42.367 "claimed": true, 00:21:42.367 "claim_type": "exclusive_write", 00:21:42.367 "zoned": false, 00:21:42.367 "supported_io_types": { 00:21:42.367 "read": true, 00:21:42.367 "write": true, 00:21:42.367 "unmap": true, 00:21:42.367 "flush": true, 00:21:42.367 "reset": true, 00:21:42.367 "nvme_admin": false, 00:21:42.367 "nvme_io": false, 00:21:42.367 "nvme_io_md": false, 00:21:42.367 "write_zeroes": true, 00:21:42.367 "zcopy": true, 00:21:42.367 "get_zone_info": false, 00:21:42.367 "zone_management": false, 00:21:42.367 "zone_append": false, 00:21:42.367 "compare": false, 00:21:42.367 "compare_and_write": false, 00:21:42.367 "abort": true, 00:21:42.367 "seek_hole": false, 00:21:42.367 "seek_data": false, 00:21:42.367 "copy": true, 00:21:42.367 "nvme_iov_md": false 00:21:42.367 }, 00:21:42.367 "memory_domains": [ 00:21:42.367 { 00:21:42.367 "dma_device_id": "system", 00:21:42.367 "dma_device_type": 1 00:21:42.367 }, 00:21:42.367 { 00:21:42.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.367 "dma_device_type": 2 00:21:42.367 } 00:21:42.367 ], 00:21:42.367 "driver_specific": {} 00:21:42.367 }' 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.367 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.626 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.626 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.626 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.626 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:42.626 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.194 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.194 "name": "BaseBdev3", 00:21:43.194 "aliases": [ 00:21:43.194 "c56d7174-31cb-4131-be1d-fdd7133d9cf2" 00:21:43.194 ], 00:21:43.194 "product_name": "Malloc disk", 00:21:43.194 "block_size": 512, 00:21:43.194 "num_blocks": 65536, 00:21:43.194 "uuid": "c56d7174-31cb-4131-be1d-fdd7133d9cf2", 00:21:43.194 "assigned_rate_limits": { 00:21:43.194 "rw_ios_per_sec": 0, 00:21:43.194 "rw_mbytes_per_sec": 0, 00:21:43.194 "r_mbytes_per_sec": 0, 00:21:43.194 "w_mbytes_per_sec": 0 00:21:43.194 }, 00:21:43.194 "claimed": true, 
00:21:43.194 "claim_type": "exclusive_write", 00:21:43.194 "zoned": false, 00:21:43.194 "supported_io_types": { 00:21:43.194 "read": true, 00:21:43.194 "write": true, 00:21:43.194 "unmap": true, 00:21:43.194 "flush": true, 00:21:43.194 "reset": true, 00:21:43.194 "nvme_admin": false, 00:21:43.194 "nvme_io": false, 00:21:43.194 "nvme_io_md": false, 00:21:43.194 "write_zeroes": true, 00:21:43.194 "zcopy": true, 00:21:43.194 "get_zone_info": false, 00:21:43.194 "zone_management": false, 00:21:43.194 "zone_append": false, 00:21:43.194 "compare": false, 00:21:43.194 "compare_and_write": false, 00:21:43.194 "abort": true, 00:21:43.194 "seek_hole": false, 00:21:43.194 "seek_data": false, 00:21:43.194 "copy": true, 00:21:43.194 "nvme_iov_md": false 00:21:43.194 }, 00:21:43.194 "memory_domains": [ 00:21:43.194 { 00:21:43.194 "dma_device_id": "system", 00:21:43.194 "dma_device_type": 1 00:21:43.194 }, 00:21:43.194 { 00:21:43.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.194 "dma_device_type": 2 00:21:43.194 } 00:21:43.194 ], 00:21:43.194 "driver_specific": {} 00:21:43.194 }' 00:21:43.194 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.194 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.194 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.194 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.194 10:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.194 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.194 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.194 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:43.453 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.711 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.711 "name": "BaseBdev4", 00:21:43.711 "aliases": [ 00:21:43.711 "8ff6810d-246b-42c3-a498-e00903cb32bc" 00:21:43.711 ], 00:21:43.711 "product_name": "Malloc disk", 00:21:43.711 "block_size": 512, 00:21:43.711 "num_blocks": 65536, 00:21:43.711 "uuid": "8ff6810d-246b-42c3-a498-e00903cb32bc", 00:21:43.711 "assigned_rate_limits": { 00:21:43.711 "rw_ios_per_sec": 0, 00:21:43.711 "rw_mbytes_per_sec": 0, 00:21:43.711 "r_mbytes_per_sec": 0, 00:21:43.711 "w_mbytes_per_sec": 0 00:21:43.711 }, 00:21:43.711 "claimed": true, 00:21:43.711 "claim_type": "exclusive_write", 00:21:43.711 "zoned": false, 00:21:43.711 "supported_io_types": { 00:21:43.711 "read": true, 00:21:43.711 "write": true, 00:21:43.711 
"unmap": true, 00:21:43.711 "flush": true, 00:21:43.711 "reset": true, 00:21:43.711 "nvme_admin": false, 00:21:43.711 "nvme_io": false, 00:21:43.711 "nvme_io_md": false, 00:21:43.711 "write_zeroes": true, 00:21:43.711 "zcopy": true, 00:21:43.711 "get_zone_info": false, 00:21:43.711 "zone_management": false, 00:21:43.711 "zone_append": false, 00:21:43.711 "compare": false, 00:21:43.711 "compare_and_write": false, 00:21:43.711 "abort": true, 00:21:43.711 "seek_hole": false, 00:21:43.711 "seek_data": false, 00:21:43.711 "copy": true, 00:21:43.711 "nvme_iov_md": false 00:21:43.711 }, 00:21:43.711 "memory_domains": [ 00:21:43.711 { 00:21:43.711 "dma_device_id": "system", 00:21:43.711 "dma_device_type": 1 00:21:43.711 }, 00:21:43.711 { 00:21:43.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.711 "dma_device_type": 2 00:21:43.711 } 00:21:43.711 ], 00:21:43.711 "driver_specific": {} 00:21:43.711 }' 00:21:43.711 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.712 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.712 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.712 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.712 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.712 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.712 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.970 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.970 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.970 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.970 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.970 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.970 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:44.230 [2024-07-26 10:31:56.927409] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.230 [2024-07-26 10:31:56.927434] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.230 [2024-07-26 10:31:56.927488] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.230 [2024-07-26 10:31:56.927544] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.230 [2024-07-26 10:31:56.927554] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x97df80 name Existed_Raid, state offline 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3435789 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3435789 ']' 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3435789 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3435789 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3435789' 00:21:44.230 killing process with pid 3435789 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3435789 00:21:44.230 [2024-07-26 10:31:56.999247] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.230 10:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3435789 00:21:44.230 [2024-07-26 10:31:57.030862] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.489 10:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:44.489 00:21:44.490 real 0m31.420s 00:21:44.490 user 0m57.666s 00:21:44.490 sys 0m5.715s 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.490 ************************************ 00:21:44.490 END TEST raid_state_function_test 00:21:44.490 ************************************ 00:21:44.490 10:31:57 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:44.490 10:31:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:44.490 10:31:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.490 10:31:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.490 ************************************ 00:21:44.490 START TEST raid_state_function_test_sb 00:21:44.490 ************************************ 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.490 10:31:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3441713 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3441713' 00:21:44.490 Process raid pid: 3441713 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3441713 /var/tmp/spdk-raid.sock 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3441713 ']' 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:44.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.490 10:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.490 [2024-07-26 10:31:57.352593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
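For reference, the setup being traced here follows a simple pattern: launch the standalone bdev_svc application on a private RPC socket, wait until that socket answers, then drive every later step through rpc.py. A minimal hand-run sketch of that pattern, assuming the same workspace paths and socket as the trace (the harness itself relies on its waitforlisten helper rather than the polling loop shown):

sock=/var/tmp/spdk-raid.sock
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
svc_pid=$!
# Poll until the UNIX-domain socket accepts RPCs; rpc_get_methods is a cheap query
# that succeeds as soon as the app is listening.
until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done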
00:21:44.490 [2024-07-26 10:31:57.352647] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:44.750 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:44.750 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:44.750 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:44.750 [2024-07-26 10:31:57.489586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.750 [2024-07-26 10:31:57.534092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.750 [2024-07-26 10:31:57.598181] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.750 [2024-07-26 10:31:57.598212] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:45.687 [2024-07-26 10:31:58.446945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.687 [2024-07-26 10:31:58.446982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.687 [2024-07-26 10:31:58.446992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.687 [2024-07-26 10:31:58.447007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.687 [2024-07-26 10:31:58.447015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.687 [2024-07-26 10:31:58.447025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.687 [2024-07-26 10:31:58.447033] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:45.687 [2024-07-26 10:31:58.447042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.687 10:31:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.687 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.947 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.947 "name": "Existed_Raid", 00:21:45.947 "uuid": "2dc19bdb-7e88-4d21-99f7-b6c7fd146648", 00:21:45.947 "strip_size_kb": 64, 00:21:45.947 "state": "configuring", 00:21:45.947 "raid_level": "concat", 00:21:45.947 "superblock": true, 00:21:45.947 "num_base_bdevs": 4, 00:21:45.947 "num_base_bdevs_discovered": 0, 00:21:45.947 "num_base_bdevs_operational": 4, 00:21:45.947 "base_bdevs_list": [ 00:21:45.947 { 00:21:45.947 "name": "BaseBdev1", 00:21:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.947 "is_configured": false, 00:21:45.947 "data_offset": 0, 00:21:45.947 "data_size": 0 00:21:45.947 }, 00:21:45.947 { 00:21:45.947 "name": "BaseBdev2", 00:21:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.947 "is_configured": false, 00:21:45.947 "data_offset": 0, 00:21:45.947 "data_size": 0 00:21:45.947 }, 00:21:45.947 { 00:21:45.947 "name": "BaseBdev3", 00:21:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.947 "is_configured": false, 00:21:45.947 "data_offset": 0, 00:21:45.947 "data_size": 0 00:21:45.947 }, 00:21:45.947 { 00:21:45.947 "name": "BaseBdev4", 00:21:45.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.947 "is_configured": false, 00:21:45.947 "data_offset": 0, 00:21:45.947 "data_size": 0 00:21:45.947 } 00:21:45.947 ] 00:21:45.947 }' 00:21:45.947 10:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.947 10:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.515 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:46.775 [2024-07-26 10:31:59.445426] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.775 [2024-07-26 10:31:59.445457] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f5bb70 name Existed_Raid, state configuring 00:21:46.775 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s 
-r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:46.775 [2024-07-26 10:31:59.670058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:46.775 [2024-07-26 10:31:59.670090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:46.775 [2024-07-26 10:31:59.670099] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:46.775 [2024-07-26 10:31:59.670109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.775 [2024-07-26 10:31:59.670117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:46.775 [2024-07-26 10:31:59.670127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:46.775 [2024-07-26 10:31:59.670135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:46.775 [2024-07-26 10:31:59.670150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:47.034 [2024-07-26 10:31:59.903891] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:47.034 BaseBdev1 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:47.034 10:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.293 10:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:47.553 [ 00:21:47.553 { 00:21:47.553 "name": "BaseBdev1", 00:21:47.553 "aliases": [ 00:21:47.553 "e6edaed5-18cf-4615-bdc4-862627ce151b" 00:21:47.553 ], 00:21:47.553 "product_name": "Malloc disk", 00:21:47.553 "block_size": 512, 00:21:47.553 "num_blocks": 65536, 00:21:47.553 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:47.553 "assigned_rate_limits": { 00:21:47.553 "rw_ios_per_sec": 0, 00:21:47.553 "rw_mbytes_per_sec": 0, 00:21:47.553 "r_mbytes_per_sec": 0, 00:21:47.553 "w_mbytes_per_sec": 0 00:21:47.553 }, 00:21:47.553 "claimed": true, 00:21:47.553 "claim_type": "exclusive_write", 00:21:47.553 "zoned": false, 00:21:47.553 "supported_io_types": { 00:21:47.553 "read": true, 00:21:47.553 "write": true, 00:21:47.553 "unmap": true, 00:21:47.553 "flush": true, 00:21:47.553 "reset": true, 00:21:47.553 "nvme_admin": false, 00:21:47.553 "nvme_io": false, 00:21:47.553 "nvme_io_md": false, 00:21:47.553 "write_zeroes": true, 
00:21:47.553 "zcopy": true, 00:21:47.553 "get_zone_info": false, 00:21:47.553 "zone_management": false, 00:21:47.553 "zone_append": false, 00:21:47.553 "compare": false, 00:21:47.553 "compare_and_write": false, 00:21:47.553 "abort": true, 00:21:47.553 "seek_hole": false, 00:21:47.553 "seek_data": false, 00:21:47.553 "copy": true, 00:21:47.553 "nvme_iov_md": false 00:21:47.553 }, 00:21:47.553 "memory_domains": [ 00:21:47.553 { 00:21:47.553 "dma_device_id": "system", 00:21:47.553 "dma_device_type": 1 00:21:47.553 }, 00:21:47.553 { 00:21:47.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.553 "dma_device_type": 2 00:21:47.553 } 00:21:47.553 ], 00:21:47.553 "driver_specific": {} 00:21:47.553 } 00:21:47.553 ] 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.553 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.813 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.813 "name": "Existed_Raid", 00:21:47.813 "uuid": "743fac35-56cd-4d73-82e8-f710177d0a39", 00:21:47.813 "strip_size_kb": 64, 00:21:47.813 "state": "configuring", 00:21:47.813 "raid_level": "concat", 00:21:47.813 "superblock": true, 00:21:47.813 "num_base_bdevs": 4, 00:21:47.813 "num_base_bdevs_discovered": 1, 00:21:47.813 "num_base_bdevs_operational": 4, 00:21:47.813 "base_bdevs_list": [ 00:21:47.813 { 00:21:47.813 "name": "BaseBdev1", 00:21:47.813 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:47.813 "is_configured": true, 00:21:47.813 "data_offset": 2048, 00:21:47.813 "data_size": 63488 00:21:47.813 }, 00:21:47.813 { 00:21:47.813 "name": "BaseBdev2", 00:21:47.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.813 "is_configured": false, 00:21:47.813 "data_offset": 0, 00:21:47.813 "data_size": 0 00:21:47.813 }, 00:21:47.813 { 00:21:47.813 "name": "BaseBdev3", 00:21:47.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.813 "is_configured": false, 00:21:47.813 "data_offset": 0, 00:21:47.813 "data_size": 0 00:21:47.813 }, 00:21:47.813 { 
00:21:47.813 "name": "BaseBdev4", 00:21:47.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.813 "is_configured": false, 00:21:47.813 "data_offset": 0, 00:21:47.813 "data_size": 0 00:21:47.813 } 00:21:47.813 ] 00:21:47.813 }' 00:21:47.813 10:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.813 10:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.381 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:48.640 [2024-07-26 10:32:01.411874] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:48.640 [2024-07-26 10:32:01.411909] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f5b4a0 name Existed_Raid, state configuring 00:21:48.640 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:48.899 [2024-07-26 10:32:01.640515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.899 [2024-07-26 10:32:01.641875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.899 [2024-07-26 10:32:01.641906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.899 [2024-07-26 10:32:01.641915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:48.899 [2024-07-26 10:32:01.641926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:48.899 [2024-07-26 10:32:01.641934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:48.899 [2024-07-26 10:32:01.641944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.899 10:32:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.899 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.158 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.158 "name": "Existed_Raid", 00:21:49.158 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:49.158 "strip_size_kb": 64, 00:21:49.158 "state": "configuring", 00:21:49.158 "raid_level": "concat", 00:21:49.158 "superblock": true, 00:21:49.158 "num_base_bdevs": 4, 00:21:49.158 "num_base_bdevs_discovered": 1, 00:21:49.158 "num_base_bdevs_operational": 4, 00:21:49.158 "base_bdevs_list": [ 00:21:49.158 { 00:21:49.158 "name": "BaseBdev1", 00:21:49.158 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:49.158 "is_configured": true, 00:21:49.158 "data_offset": 2048, 00:21:49.158 "data_size": 63488 00:21:49.158 }, 00:21:49.158 { 00:21:49.158 "name": "BaseBdev2", 00:21:49.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.158 "is_configured": false, 00:21:49.158 "data_offset": 0, 00:21:49.158 "data_size": 0 00:21:49.158 }, 00:21:49.158 { 00:21:49.158 "name": "BaseBdev3", 00:21:49.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.158 "is_configured": false, 00:21:49.158 "data_offset": 0, 00:21:49.158 "data_size": 0 00:21:49.158 }, 00:21:49.158 { 00:21:49.158 "name": "BaseBdev4", 00:21:49.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.158 "is_configured": false, 00:21:49.158 "data_offset": 0, 00:21:49.158 "data_size": 0 00:21:49.158 } 00:21:49.158 ] 00:21:49.158 }' 00:21:49.158 10:32:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.158 10:32:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.726 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:49.984 [2024-07-26 10:32:02.654211] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.984 BaseBdev2 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:49.984 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.243 10:32:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:50.243 [ 00:21:50.243 { 00:21:50.243 "name": "BaseBdev2", 00:21:50.243 "aliases": [ 00:21:50.243 
"535685db-93f6-4357-9efe-c8aed1bb77d5" 00:21:50.243 ], 00:21:50.243 "product_name": "Malloc disk", 00:21:50.243 "block_size": 512, 00:21:50.243 "num_blocks": 65536, 00:21:50.243 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:50.243 "assigned_rate_limits": { 00:21:50.243 "rw_ios_per_sec": 0, 00:21:50.243 "rw_mbytes_per_sec": 0, 00:21:50.243 "r_mbytes_per_sec": 0, 00:21:50.243 "w_mbytes_per_sec": 0 00:21:50.243 }, 00:21:50.243 "claimed": true, 00:21:50.243 "claim_type": "exclusive_write", 00:21:50.243 "zoned": false, 00:21:50.243 "supported_io_types": { 00:21:50.243 "read": true, 00:21:50.243 "write": true, 00:21:50.243 "unmap": true, 00:21:50.243 "flush": true, 00:21:50.243 "reset": true, 00:21:50.243 "nvme_admin": false, 00:21:50.243 "nvme_io": false, 00:21:50.243 "nvme_io_md": false, 00:21:50.243 "write_zeroes": true, 00:21:50.243 "zcopy": true, 00:21:50.243 "get_zone_info": false, 00:21:50.243 "zone_management": false, 00:21:50.243 "zone_append": false, 00:21:50.243 "compare": false, 00:21:50.243 "compare_and_write": false, 00:21:50.243 "abort": true, 00:21:50.243 "seek_hole": false, 00:21:50.243 "seek_data": false, 00:21:50.243 "copy": true, 00:21:50.243 "nvme_iov_md": false 00:21:50.243 }, 00:21:50.243 "memory_domains": [ 00:21:50.243 { 00:21:50.243 "dma_device_id": "system", 00:21:50.243 "dma_device_type": 1 00:21:50.243 }, 00:21:50.243 { 00:21:50.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.243 "dma_device_type": 2 00:21:50.243 } 00:21:50.243 ], 00:21:50.243 "driver_specific": {} 00:21:50.243 } 00:21:50.243 ] 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.243 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.502 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.502 "name": "Existed_Raid", 
00:21:50.502 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:50.502 "strip_size_kb": 64, 00:21:50.502 "state": "configuring", 00:21:50.502 "raid_level": "concat", 00:21:50.502 "superblock": true, 00:21:50.502 "num_base_bdevs": 4, 00:21:50.502 "num_base_bdevs_discovered": 2, 00:21:50.502 "num_base_bdevs_operational": 4, 00:21:50.502 "base_bdevs_list": [ 00:21:50.502 { 00:21:50.502 "name": "BaseBdev1", 00:21:50.502 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:50.502 "is_configured": true, 00:21:50.502 "data_offset": 2048, 00:21:50.502 "data_size": 63488 00:21:50.502 }, 00:21:50.502 { 00:21:50.502 "name": "BaseBdev2", 00:21:50.502 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:50.502 "is_configured": true, 00:21:50.502 "data_offset": 2048, 00:21:50.502 "data_size": 63488 00:21:50.502 }, 00:21:50.502 { 00:21:50.502 "name": "BaseBdev3", 00:21:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.502 "is_configured": false, 00:21:50.502 "data_offset": 0, 00:21:50.502 "data_size": 0 00:21:50.502 }, 00:21:50.502 { 00:21:50.502 "name": "BaseBdev4", 00:21:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.502 "is_configured": false, 00:21:50.502 "data_offset": 0, 00:21:50.502 "data_size": 0 00:21:50.502 } 00:21:50.502 ] 00:21:50.502 }' 00:21:50.502 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.502 10:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.068 10:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:51.326 [2024-07-26 10:32:04.153255] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:51.326 BaseBdev3 00:21:51.326 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:51.326 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:51.326 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:51.326 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:51.327 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:51.327 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:51.327 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:51.585 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:51.845 [ 00:21:51.845 { 00:21:51.845 "name": "BaseBdev3", 00:21:51.845 "aliases": [ 00:21:51.845 "3ca57f73-e648-4191-a97c-0cada76520f8" 00:21:51.845 ], 00:21:51.845 "product_name": "Malloc disk", 00:21:51.845 "block_size": 512, 00:21:51.845 "num_blocks": 65536, 00:21:51.845 "uuid": "3ca57f73-e648-4191-a97c-0cada76520f8", 00:21:51.845 "assigned_rate_limits": { 00:21:51.845 "rw_ios_per_sec": 0, 00:21:51.845 "rw_mbytes_per_sec": 0, 00:21:51.845 "r_mbytes_per_sec": 0, 00:21:51.845 "w_mbytes_per_sec": 0 00:21:51.845 }, 00:21:51.845 "claimed": true, 
00:21:51.845 "claim_type": "exclusive_write", 00:21:51.845 "zoned": false, 00:21:51.845 "supported_io_types": { 00:21:51.845 "read": true, 00:21:51.845 "write": true, 00:21:51.845 "unmap": true, 00:21:51.845 "flush": true, 00:21:51.845 "reset": true, 00:21:51.845 "nvme_admin": false, 00:21:51.845 "nvme_io": false, 00:21:51.845 "nvme_io_md": false, 00:21:51.845 "write_zeroes": true, 00:21:51.845 "zcopy": true, 00:21:51.845 "get_zone_info": false, 00:21:51.845 "zone_management": false, 00:21:51.845 "zone_append": false, 00:21:51.845 "compare": false, 00:21:51.845 "compare_and_write": false, 00:21:51.845 "abort": true, 00:21:51.845 "seek_hole": false, 00:21:51.845 "seek_data": false, 00:21:51.845 "copy": true, 00:21:51.845 "nvme_iov_md": false 00:21:51.845 }, 00:21:51.845 "memory_domains": [ 00:21:51.845 { 00:21:51.845 "dma_device_id": "system", 00:21:51.845 "dma_device_type": 1 00:21:51.845 }, 00:21:51.845 { 00:21:51.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.845 "dma_device_type": 2 00:21:51.845 } 00:21:51.845 ], 00:21:51.845 "driver_specific": {} 00:21:51.845 } 00:21:51.845 ] 00:21:51.845 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.846 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.105 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.105 "name": "Existed_Raid", 00:21:52.105 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:52.105 "strip_size_kb": 64, 00:21:52.105 "state": "configuring", 00:21:52.105 "raid_level": "concat", 00:21:52.105 "superblock": true, 00:21:52.105 "num_base_bdevs": 4, 00:21:52.105 "num_base_bdevs_discovered": 3, 00:21:52.105 "num_base_bdevs_operational": 4, 00:21:52.105 "base_bdevs_list": [ 00:21:52.105 { 00:21:52.105 "name": "BaseBdev1", 00:21:52.105 "uuid": 
"e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:52.105 "is_configured": true, 00:21:52.105 "data_offset": 2048, 00:21:52.105 "data_size": 63488 00:21:52.105 }, 00:21:52.105 { 00:21:52.105 "name": "BaseBdev2", 00:21:52.105 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:52.105 "is_configured": true, 00:21:52.105 "data_offset": 2048, 00:21:52.105 "data_size": 63488 00:21:52.105 }, 00:21:52.105 { 00:21:52.105 "name": "BaseBdev3", 00:21:52.105 "uuid": "3ca57f73-e648-4191-a97c-0cada76520f8", 00:21:52.105 "is_configured": true, 00:21:52.105 "data_offset": 2048, 00:21:52.105 "data_size": 63488 00:21:52.105 }, 00:21:52.105 { 00:21:52.105 "name": "BaseBdev4", 00:21:52.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.105 "is_configured": false, 00:21:52.105 "data_offset": 0, 00:21:52.105 "data_size": 0 00:21:52.105 } 00:21:52.105 ] 00:21:52.105 }' 00:21:52.105 10:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.105 10:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.707 10:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:52.707 [2024-07-26 10:32:05.600305] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:52.707 [2024-07-26 10:32:05.600456] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x210e3e0 00:21:52.707 [2024-07-26 10:32:05.600469] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:52.707 [2024-07-26 10:32:05.600631] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f481d0 00:21:52.707 [2024-07-26 10:32:05.600737] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x210e3e0 00:21:52.707 [2024-07-26 10:32:05.600746] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x210e3e0 00:21:52.707 [2024-07-26 10:32:05.600830] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.707 BaseBdev4 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.966 10:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:53.225 [ 00:21:53.225 { 00:21:53.225 "name": "BaseBdev4", 00:21:53.225 "aliases": [ 00:21:53.225 "aba0d73d-5fe1-42c5-a046-bb5ca831414e" 00:21:53.225 ], 00:21:53.225 "product_name": "Malloc disk", 00:21:53.225 "block_size": 512, 
00:21:53.225 "num_blocks": 65536, 00:21:53.225 "uuid": "aba0d73d-5fe1-42c5-a046-bb5ca831414e", 00:21:53.225 "assigned_rate_limits": { 00:21:53.225 "rw_ios_per_sec": 0, 00:21:53.225 "rw_mbytes_per_sec": 0, 00:21:53.225 "r_mbytes_per_sec": 0, 00:21:53.225 "w_mbytes_per_sec": 0 00:21:53.225 }, 00:21:53.225 "claimed": true, 00:21:53.225 "claim_type": "exclusive_write", 00:21:53.225 "zoned": false, 00:21:53.225 "supported_io_types": { 00:21:53.225 "read": true, 00:21:53.225 "write": true, 00:21:53.225 "unmap": true, 00:21:53.225 "flush": true, 00:21:53.225 "reset": true, 00:21:53.225 "nvme_admin": false, 00:21:53.225 "nvme_io": false, 00:21:53.225 "nvme_io_md": false, 00:21:53.225 "write_zeroes": true, 00:21:53.225 "zcopy": true, 00:21:53.225 "get_zone_info": false, 00:21:53.225 "zone_management": false, 00:21:53.225 "zone_append": false, 00:21:53.225 "compare": false, 00:21:53.225 "compare_and_write": false, 00:21:53.225 "abort": true, 00:21:53.225 "seek_hole": false, 00:21:53.225 "seek_data": false, 00:21:53.225 "copy": true, 00:21:53.225 "nvme_iov_md": false 00:21:53.225 }, 00:21:53.225 "memory_domains": [ 00:21:53.225 { 00:21:53.225 "dma_device_id": "system", 00:21:53.225 "dma_device_type": 1 00:21:53.225 }, 00:21:53.225 { 00:21:53.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.225 "dma_device_type": 2 00:21:53.225 } 00:21:53.225 ], 00:21:53.225 "driver_specific": {} 00:21:53.225 } 00:21:53.225 ] 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.225 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.485 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.485 "name": "Existed_Raid", 00:21:53.485 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:53.485 "strip_size_kb": 64, 00:21:53.485 "state": "online", 00:21:53.485 
"raid_level": "concat", 00:21:53.485 "superblock": true, 00:21:53.485 "num_base_bdevs": 4, 00:21:53.485 "num_base_bdevs_discovered": 4, 00:21:53.485 "num_base_bdevs_operational": 4, 00:21:53.485 "base_bdevs_list": [ 00:21:53.485 { 00:21:53.485 "name": "BaseBdev1", 00:21:53.485 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:53.485 "is_configured": true, 00:21:53.485 "data_offset": 2048, 00:21:53.485 "data_size": 63488 00:21:53.485 }, 00:21:53.485 { 00:21:53.485 "name": "BaseBdev2", 00:21:53.485 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:53.485 "is_configured": true, 00:21:53.485 "data_offset": 2048, 00:21:53.485 "data_size": 63488 00:21:53.485 }, 00:21:53.485 { 00:21:53.485 "name": "BaseBdev3", 00:21:53.485 "uuid": "3ca57f73-e648-4191-a97c-0cada76520f8", 00:21:53.485 "is_configured": true, 00:21:53.485 "data_offset": 2048, 00:21:53.485 "data_size": 63488 00:21:53.485 }, 00:21:53.485 { 00:21:53.485 "name": "BaseBdev4", 00:21:53.485 "uuid": "aba0d73d-5fe1-42c5-a046-bb5ca831414e", 00:21:53.485 "is_configured": true, 00:21:53.485 "data_offset": 2048, 00:21:53.485 "data_size": 63488 00:21:53.485 } 00:21:53.485 ] 00:21:53.485 }' 00:21:53.485 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.485 10:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:54.052 10:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:54.312 [2024-07-26 10:32:07.040559] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:54.312 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:54.312 "name": "Existed_Raid", 00:21:54.312 "aliases": [ 00:21:54.312 "ea829748-016f-4ae3-96be-c6bb2d4ff8fe" 00:21:54.312 ], 00:21:54.312 "product_name": "Raid Volume", 00:21:54.312 "block_size": 512, 00:21:54.312 "num_blocks": 253952, 00:21:54.312 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:54.312 "assigned_rate_limits": { 00:21:54.312 "rw_ios_per_sec": 0, 00:21:54.312 "rw_mbytes_per_sec": 0, 00:21:54.312 "r_mbytes_per_sec": 0, 00:21:54.312 "w_mbytes_per_sec": 0 00:21:54.312 }, 00:21:54.312 "claimed": false, 00:21:54.312 "zoned": false, 00:21:54.312 "supported_io_types": { 00:21:54.312 "read": true, 00:21:54.312 "write": true, 00:21:54.312 "unmap": true, 00:21:54.312 "flush": true, 00:21:54.312 "reset": true, 00:21:54.312 "nvme_admin": false, 00:21:54.312 "nvme_io": false, 00:21:54.312 "nvme_io_md": false, 00:21:54.312 "write_zeroes": true, 00:21:54.312 "zcopy": false, 00:21:54.312 "get_zone_info": false, 00:21:54.312 
"zone_management": false, 00:21:54.312 "zone_append": false, 00:21:54.312 "compare": false, 00:21:54.312 "compare_and_write": false, 00:21:54.312 "abort": false, 00:21:54.312 "seek_hole": false, 00:21:54.312 "seek_data": false, 00:21:54.312 "copy": false, 00:21:54.312 "nvme_iov_md": false 00:21:54.312 }, 00:21:54.312 "memory_domains": [ 00:21:54.312 { 00:21:54.312 "dma_device_id": "system", 00:21:54.312 "dma_device_type": 1 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.312 "dma_device_type": 2 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "system", 00:21:54.312 "dma_device_type": 1 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.312 "dma_device_type": 2 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "system", 00:21:54.312 "dma_device_type": 1 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.312 "dma_device_type": 2 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "system", 00:21:54.312 "dma_device_type": 1 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.312 "dma_device_type": 2 00:21:54.312 } 00:21:54.312 ], 00:21:54.312 "driver_specific": { 00:21:54.312 "raid": { 00:21:54.312 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:54.312 "strip_size_kb": 64, 00:21:54.312 "state": "online", 00:21:54.312 "raid_level": "concat", 00:21:54.312 "superblock": true, 00:21:54.312 "num_base_bdevs": 4, 00:21:54.312 "num_base_bdevs_discovered": 4, 00:21:54.312 "num_base_bdevs_operational": 4, 00:21:54.312 "base_bdevs_list": [ 00:21:54.312 { 00:21:54.312 "name": "BaseBdev1", 00:21:54.312 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:54.312 "is_configured": true, 00:21:54.312 "data_offset": 2048, 00:21:54.312 "data_size": 63488 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "name": "BaseBdev2", 00:21:54.312 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:54.312 "is_configured": true, 00:21:54.312 "data_offset": 2048, 00:21:54.312 "data_size": 63488 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "name": "BaseBdev3", 00:21:54.312 "uuid": "3ca57f73-e648-4191-a97c-0cada76520f8", 00:21:54.312 "is_configured": true, 00:21:54.312 "data_offset": 2048, 00:21:54.312 "data_size": 63488 00:21:54.312 }, 00:21:54.312 { 00:21:54.312 "name": "BaseBdev4", 00:21:54.312 "uuid": "aba0d73d-5fe1-42c5-a046-bb5ca831414e", 00:21:54.312 "is_configured": true, 00:21:54.312 "data_offset": 2048, 00:21:54.312 "data_size": 63488 00:21:54.312 } 00:21:54.312 ] 00:21:54.312 } 00:21:54.312 } 00:21:54.312 }' 00:21:54.312 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:54.312 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:54.312 BaseBdev2 00:21:54.312 BaseBdev3 00:21:54.312 BaseBdev4' 00:21:54.312 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.312 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:54.312 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.572 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.572 
"name": "BaseBdev1", 00:21:54.572 "aliases": [ 00:21:54.572 "e6edaed5-18cf-4615-bdc4-862627ce151b" 00:21:54.572 ], 00:21:54.572 "product_name": "Malloc disk", 00:21:54.572 "block_size": 512, 00:21:54.572 "num_blocks": 65536, 00:21:54.572 "uuid": "e6edaed5-18cf-4615-bdc4-862627ce151b", 00:21:54.572 "assigned_rate_limits": { 00:21:54.572 "rw_ios_per_sec": 0, 00:21:54.572 "rw_mbytes_per_sec": 0, 00:21:54.572 "r_mbytes_per_sec": 0, 00:21:54.572 "w_mbytes_per_sec": 0 00:21:54.572 }, 00:21:54.572 "claimed": true, 00:21:54.572 "claim_type": "exclusive_write", 00:21:54.572 "zoned": false, 00:21:54.572 "supported_io_types": { 00:21:54.572 "read": true, 00:21:54.572 "write": true, 00:21:54.572 "unmap": true, 00:21:54.572 "flush": true, 00:21:54.572 "reset": true, 00:21:54.572 "nvme_admin": false, 00:21:54.572 "nvme_io": false, 00:21:54.572 "nvme_io_md": false, 00:21:54.572 "write_zeroes": true, 00:21:54.572 "zcopy": true, 00:21:54.572 "get_zone_info": false, 00:21:54.572 "zone_management": false, 00:21:54.572 "zone_append": false, 00:21:54.572 "compare": false, 00:21:54.572 "compare_and_write": false, 00:21:54.572 "abort": true, 00:21:54.572 "seek_hole": false, 00:21:54.572 "seek_data": false, 00:21:54.572 "copy": true, 00:21:54.572 "nvme_iov_md": false 00:21:54.572 }, 00:21:54.572 "memory_domains": [ 00:21:54.572 { 00:21:54.572 "dma_device_id": "system", 00:21:54.572 "dma_device_type": 1 00:21:54.572 }, 00:21:54.572 { 00:21:54.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.572 "dma_device_type": 2 00:21:54.572 } 00:21:54.572 ], 00:21:54.572 "driver_specific": {} 00:21:54.572 }' 00:21:54.572 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.572 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.572 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.572 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.572 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:54.831 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.090 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.090 "name": "BaseBdev2", 00:21:55.090 "aliases": [ 00:21:55.090 "535685db-93f6-4357-9efe-c8aed1bb77d5" 00:21:55.090 ], 00:21:55.090 
"product_name": "Malloc disk", 00:21:55.090 "block_size": 512, 00:21:55.090 "num_blocks": 65536, 00:21:55.090 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:55.090 "assigned_rate_limits": { 00:21:55.090 "rw_ios_per_sec": 0, 00:21:55.090 "rw_mbytes_per_sec": 0, 00:21:55.090 "r_mbytes_per_sec": 0, 00:21:55.090 "w_mbytes_per_sec": 0 00:21:55.090 }, 00:21:55.090 "claimed": true, 00:21:55.090 "claim_type": "exclusive_write", 00:21:55.090 "zoned": false, 00:21:55.090 "supported_io_types": { 00:21:55.090 "read": true, 00:21:55.090 "write": true, 00:21:55.090 "unmap": true, 00:21:55.090 "flush": true, 00:21:55.090 "reset": true, 00:21:55.090 "nvme_admin": false, 00:21:55.090 "nvme_io": false, 00:21:55.090 "nvme_io_md": false, 00:21:55.090 "write_zeroes": true, 00:21:55.090 "zcopy": true, 00:21:55.090 "get_zone_info": false, 00:21:55.090 "zone_management": false, 00:21:55.090 "zone_append": false, 00:21:55.090 "compare": false, 00:21:55.090 "compare_and_write": false, 00:21:55.090 "abort": true, 00:21:55.090 "seek_hole": false, 00:21:55.090 "seek_data": false, 00:21:55.090 "copy": true, 00:21:55.090 "nvme_iov_md": false 00:21:55.090 }, 00:21:55.090 "memory_domains": [ 00:21:55.090 { 00:21:55.090 "dma_device_id": "system", 00:21:55.090 "dma_device_type": 1 00:21:55.090 }, 00:21:55.090 { 00:21:55.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.090 "dma_device_type": 2 00:21:55.090 } 00:21:55.090 ], 00:21:55.090 "driver_specific": {} 00:21:55.090 }' 00:21:55.090 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.090 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.090 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.090 10:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:55.349 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.608 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.608 "name": "BaseBdev3", 00:21:55.608 "aliases": [ 00:21:55.608 "3ca57f73-e648-4191-a97c-0cada76520f8" 00:21:55.608 ], 00:21:55.608 "product_name": "Malloc disk", 00:21:55.608 "block_size": 512, 00:21:55.608 "num_blocks": 65536, 00:21:55.608 "uuid": 
"3ca57f73-e648-4191-a97c-0cada76520f8", 00:21:55.608 "assigned_rate_limits": { 00:21:55.608 "rw_ios_per_sec": 0, 00:21:55.608 "rw_mbytes_per_sec": 0, 00:21:55.608 "r_mbytes_per_sec": 0, 00:21:55.608 "w_mbytes_per_sec": 0 00:21:55.608 }, 00:21:55.608 "claimed": true, 00:21:55.608 "claim_type": "exclusive_write", 00:21:55.608 "zoned": false, 00:21:55.608 "supported_io_types": { 00:21:55.608 "read": true, 00:21:55.608 "write": true, 00:21:55.608 "unmap": true, 00:21:55.608 "flush": true, 00:21:55.608 "reset": true, 00:21:55.608 "nvme_admin": false, 00:21:55.608 "nvme_io": false, 00:21:55.608 "nvme_io_md": false, 00:21:55.608 "write_zeroes": true, 00:21:55.608 "zcopy": true, 00:21:55.608 "get_zone_info": false, 00:21:55.608 "zone_management": false, 00:21:55.608 "zone_append": false, 00:21:55.608 "compare": false, 00:21:55.608 "compare_and_write": false, 00:21:55.608 "abort": true, 00:21:55.608 "seek_hole": false, 00:21:55.608 "seek_data": false, 00:21:55.608 "copy": true, 00:21:55.608 "nvme_iov_md": false 00:21:55.608 }, 00:21:55.608 "memory_domains": [ 00:21:55.608 { 00:21:55.608 "dma_device_id": "system", 00:21:55.608 "dma_device_type": 1 00:21:55.608 }, 00:21:55.608 { 00:21:55.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.608 "dma_device_type": 2 00:21:55.608 } 00:21:55.608 ], 00:21:55.608 "driver_specific": {} 00:21:55.608 }' 00:21:55.608 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.608 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.608 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.608 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.608 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:55.868 10:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.436 "name": "BaseBdev4", 00:21:56.436 "aliases": [ 00:21:56.436 "aba0d73d-5fe1-42c5-a046-bb5ca831414e" 00:21:56.436 ], 00:21:56.436 "product_name": "Malloc disk", 00:21:56.436 "block_size": 512, 00:21:56.436 "num_blocks": 65536, 00:21:56.436 "uuid": "aba0d73d-5fe1-42c5-a046-bb5ca831414e", 00:21:56.436 "assigned_rate_limits": { 00:21:56.436 "rw_ios_per_sec": 0, 00:21:56.436 
"rw_mbytes_per_sec": 0, 00:21:56.436 "r_mbytes_per_sec": 0, 00:21:56.436 "w_mbytes_per_sec": 0 00:21:56.436 }, 00:21:56.436 "claimed": true, 00:21:56.436 "claim_type": "exclusive_write", 00:21:56.436 "zoned": false, 00:21:56.436 "supported_io_types": { 00:21:56.436 "read": true, 00:21:56.436 "write": true, 00:21:56.436 "unmap": true, 00:21:56.436 "flush": true, 00:21:56.436 "reset": true, 00:21:56.436 "nvme_admin": false, 00:21:56.436 "nvme_io": false, 00:21:56.436 "nvme_io_md": false, 00:21:56.436 "write_zeroes": true, 00:21:56.436 "zcopy": true, 00:21:56.436 "get_zone_info": false, 00:21:56.436 "zone_management": false, 00:21:56.436 "zone_append": false, 00:21:56.436 "compare": false, 00:21:56.436 "compare_and_write": false, 00:21:56.436 "abort": true, 00:21:56.436 "seek_hole": false, 00:21:56.436 "seek_data": false, 00:21:56.436 "copy": true, 00:21:56.436 "nvme_iov_md": false 00:21:56.436 }, 00:21:56.436 "memory_domains": [ 00:21:56.436 { 00:21:56.436 "dma_device_id": "system", 00:21:56.436 "dma_device_type": 1 00:21:56.436 }, 00:21:56.436 { 00:21:56.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.436 "dma_device_type": 2 00:21:56.436 } 00:21:56.436 ], 00:21:56.436 "driver_specific": {} 00:21:56.436 }' 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:56.436 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.695 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.696 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:56.696 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.696 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.696 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.696 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:56.955 [2024-07-26 10:32:09.667244] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:56.955 [2024-07-26 10:32:09.667268] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.955 [2024-07-26 10:32:09.667310] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.955 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.956 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.956 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.214 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.214 "name": "Existed_Raid", 00:21:57.214 "uuid": "ea829748-016f-4ae3-96be-c6bb2d4ff8fe", 00:21:57.214 "strip_size_kb": 64, 00:21:57.214 "state": "offline", 00:21:57.214 "raid_level": "concat", 00:21:57.214 "superblock": true, 00:21:57.214 "num_base_bdevs": 4, 00:21:57.214 "num_base_bdevs_discovered": 3, 00:21:57.214 "num_base_bdevs_operational": 3, 00:21:57.214 "base_bdevs_list": [ 00:21:57.214 { 00:21:57.214 "name": null, 00:21:57.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.214 "is_configured": false, 00:21:57.214 "data_offset": 2048, 00:21:57.214 "data_size": 63488 00:21:57.214 }, 00:21:57.214 { 00:21:57.214 "name": "BaseBdev2", 00:21:57.214 "uuid": "535685db-93f6-4357-9efe-c8aed1bb77d5", 00:21:57.214 "is_configured": true, 00:21:57.214 "data_offset": 2048, 00:21:57.214 "data_size": 63488 00:21:57.214 }, 00:21:57.214 { 00:21:57.214 "name": "BaseBdev3", 00:21:57.214 "uuid": "3ca57f73-e648-4191-a97c-0cada76520f8", 00:21:57.214 "is_configured": true, 00:21:57.214 "data_offset": 2048, 00:21:57.214 "data_size": 63488 00:21:57.214 }, 00:21:57.214 { 00:21:57.214 "name": "BaseBdev4", 00:21:57.214 "uuid": "aba0d73d-5fe1-42c5-a046-bb5ca831414e", 00:21:57.214 "is_configured": true, 00:21:57.214 "data_offset": 2048, 00:21:57.214 "data_size": 63488 00:21:57.214 } 00:21:57.214 ] 00:21:57.214 }' 00:21:57.214 10:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.214 10:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.849 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:57.849 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:57.849 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.849 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:57.850 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:57.850 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:57.850 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:58.108 [2024-07-26 10:32:10.899434] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:58.108 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:58.108 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:58.108 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.108 10:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:58.366 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:58.366 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:58.366 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:58.624 [2024-07-26 10:32:11.350516] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:58.624 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:58.624 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:58.624 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.624 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:58.883 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:58.883 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:58.883 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:59.142 [2024-07-26 10:32:11.805640] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:59.142 [2024-07-26 10:32:11.805682] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x210e3e0 name Existed_Raid, state offline 00:21:59.142 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:59.142 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:59.142 10:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.142 10:32:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:59.708 BaseBdev2 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:59.708 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.967 10:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:00.225 [ 00:22:00.225 { 00:22:00.225 "name": "BaseBdev2", 00:22:00.225 "aliases": [ 00:22:00.225 "6ca1e322-1124-4bd9-9c14-7a87fd779bb5" 00:22:00.225 ], 00:22:00.225 "product_name": "Malloc disk", 00:22:00.225 "block_size": 512, 00:22:00.225 "num_blocks": 65536, 00:22:00.225 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:00.225 "assigned_rate_limits": { 00:22:00.225 "rw_ios_per_sec": 0, 00:22:00.225 "rw_mbytes_per_sec": 0, 00:22:00.225 "r_mbytes_per_sec": 0, 00:22:00.225 "w_mbytes_per_sec": 0 00:22:00.225 }, 00:22:00.225 "claimed": false, 00:22:00.225 "zoned": false, 00:22:00.225 "supported_io_types": { 00:22:00.225 "read": true, 00:22:00.225 "write": true, 00:22:00.225 "unmap": true, 00:22:00.225 "flush": true, 00:22:00.225 "reset": true, 00:22:00.225 "nvme_admin": false, 00:22:00.225 "nvme_io": false, 00:22:00.225 "nvme_io_md": false, 00:22:00.225 "write_zeroes": true, 00:22:00.225 "zcopy": true, 00:22:00.225 "get_zone_info": false, 00:22:00.225 "zone_management": false, 00:22:00.225 "zone_append": false, 00:22:00.225 "compare": false, 00:22:00.225 "compare_and_write": false, 00:22:00.225 "abort": true, 00:22:00.225 "seek_hole": false, 00:22:00.225 "seek_data": false, 00:22:00.225 "copy": true, 00:22:00.225 "nvme_iov_md": false 00:22:00.225 }, 00:22:00.225 "memory_domains": [ 00:22:00.225 { 00:22:00.225 "dma_device_id": "system", 00:22:00.225 "dma_device_type": 1 00:22:00.225 }, 00:22:00.225 { 00:22:00.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.225 "dma_device_type": 2 00:22:00.225 } 00:22:00.225 ], 00:22:00.225 
"driver_specific": {} 00:22:00.225 } 00:22:00.225 ] 00:22:00.225 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:00.225 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:00.225 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:00.225 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:00.484 BaseBdev3 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:00.484 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.741 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:00.999 [ 00:22:00.999 { 00:22:00.999 "name": "BaseBdev3", 00:22:00.999 "aliases": [ 00:22:00.999 "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448" 00:22:00.999 ], 00:22:00.999 "product_name": "Malloc disk", 00:22:00.999 "block_size": 512, 00:22:00.999 "num_blocks": 65536, 00:22:00.999 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:00.999 "assigned_rate_limits": { 00:22:00.999 "rw_ios_per_sec": 0, 00:22:00.999 "rw_mbytes_per_sec": 0, 00:22:00.999 "r_mbytes_per_sec": 0, 00:22:00.999 "w_mbytes_per_sec": 0 00:22:00.999 }, 00:22:00.999 "claimed": false, 00:22:00.999 "zoned": false, 00:22:00.999 "supported_io_types": { 00:22:00.999 "read": true, 00:22:00.999 "write": true, 00:22:00.999 "unmap": true, 00:22:00.999 "flush": true, 00:22:00.999 "reset": true, 00:22:00.999 "nvme_admin": false, 00:22:00.999 "nvme_io": false, 00:22:00.999 "nvme_io_md": false, 00:22:00.999 "write_zeroes": true, 00:22:00.999 "zcopy": true, 00:22:00.999 "get_zone_info": false, 00:22:00.999 "zone_management": false, 00:22:00.999 "zone_append": false, 00:22:00.999 "compare": false, 00:22:00.999 "compare_and_write": false, 00:22:00.999 "abort": true, 00:22:00.999 "seek_hole": false, 00:22:00.999 "seek_data": false, 00:22:00.999 "copy": true, 00:22:00.999 "nvme_iov_md": false 00:22:00.999 }, 00:22:00.999 "memory_domains": [ 00:22:00.999 { 00:22:00.999 "dma_device_id": "system", 00:22:00.999 "dma_device_type": 1 00:22:00.999 }, 00:22:00.999 { 00:22:00.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.999 "dma_device_type": 2 00:22:00.999 } 00:22:00.999 ], 00:22:00.999 "driver_specific": {} 00:22:00.999 } 00:22:00.999 ] 00:22:00.999 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:00.999 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( 
i++ )) 00:22:00.999 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:00.999 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:01.256 BaseBdev4 00:22:01.256 10:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:01.256 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:01.256 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:01.256 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:01.256 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:01.257 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:01.257 10:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.257 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:01.514 [ 00:22:01.514 { 00:22:01.514 "name": "BaseBdev4", 00:22:01.514 "aliases": [ 00:22:01.514 "a2a8ba63-e67d-4edb-83d1-97f7d53015d1" 00:22:01.514 ], 00:22:01.514 "product_name": "Malloc disk", 00:22:01.514 "block_size": 512, 00:22:01.514 "num_blocks": 65536, 00:22:01.514 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:01.514 "assigned_rate_limits": { 00:22:01.514 "rw_ios_per_sec": 0, 00:22:01.514 "rw_mbytes_per_sec": 0, 00:22:01.514 "r_mbytes_per_sec": 0, 00:22:01.514 "w_mbytes_per_sec": 0 00:22:01.514 }, 00:22:01.514 "claimed": false, 00:22:01.514 "zoned": false, 00:22:01.514 "supported_io_types": { 00:22:01.514 "read": true, 00:22:01.514 "write": true, 00:22:01.515 "unmap": true, 00:22:01.515 "flush": true, 00:22:01.515 "reset": true, 00:22:01.515 "nvme_admin": false, 00:22:01.515 "nvme_io": false, 00:22:01.515 "nvme_io_md": false, 00:22:01.515 "write_zeroes": true, 00:22:01.515 "zcopy": true, 00:22:01.515 "get_zone_info": false, 00:22:01.515 "zone_management": false, 00:22:01.515 "zone_append": false, 00:22:01.515 "compare": false, 00:22:01.515 "compare_and_write": false, 00:22:01.515 "abort": true, 00:22:01.515 "seek_hole": false, 00:22:01.515 "seek_data": false, 00:22:01.515 "copy": true, 00:22:01.515 "nvme_iov_md": false 00:22:01.515 }, 00:22:01.515 "memory_domains": [ 00:22:01.515 { 00:22:01.515 "dma_device_id": "system", 00:22:01.515 "dma_device_type": 1 00:22:01.515 }, 00:22:01.515 { 00:22:01.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.515 "dma_device_type": 2 00:22:01.515 } 00:22:01.515 ], 00:22:01.515 "driver_specific": {} 00:22:01.515 } 00:22:01.515 ] 00:22:01.515 10:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:01.515 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:01.515 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:01.515 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:01.773 [2024-07-26 10:32:14.596110] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:01.773 [2024-07-26 10:32:14.596155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:01.773 [2024-07-26 10:32:14.596176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.773 [2024-07-26 10:32:14.597392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.773 [2024-07-26 10:32:14.597432] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.773 10:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.339 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.339 "name": "Existed_Raid", 00:22:02.339 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:02.339 "strip_size_kb": 64, 00:22:02.339 "state": "configuring", 00:22:02.339 "raid_level": "concat", 00:22:02.339 "superblock": true, 00:22:02.339 "num_base_bdevs": 4, 00:22:02.339 "num_base_bdevs_discovered": 3, 00:22:02.339 "num_base_bdevs_operational": 4, 00:22:02.339 "base_bdevs_list": [ 00:22:02.339 { 00:22:02.339 "name": "BaseBdev1", 00:22:02.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.339 "is_configured": false, 00:22:02.339 "data_offset": 0, 00:22:02.339 "data_size": 0 00:22:02.339 }, 00:22:02.339 { 00:22:02.339 "name": "BaseBdev2", 00:22:02.339 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:02.339 "is_configured": true, 00:22:02.339 "data_offset": 2048, 00:22:02.339 "data_size": 63488 00:22:02.339 }, 00:22:02.339 { 00:22:02.339 "name": "BaseBdev3", 00:22:02.339 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:02.339 "is_configured": true, 00:22:02.339 "data_offset": 2048, 00:22:02.339 "data_size": 63488 00:22:02.339 }, 00:22:02.339 { 
00:22:02.339 "name": "BaseBdev4", 00:22:02.339 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:02.339 "is_configured": true, 00:22:02.339 "data_offset": 2048, 00:22:02.339 "data_size": 63488 00:22:02.339 } 00:22:02.339 ] 00:22:02.339 }' 00:22:02.339 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.339 10:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.905 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:03.163 [2024-07-26 10:32:15.843370] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.163 10:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.422 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.422 "name": "Existed_Raid", 00:22:03.422 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:03.422 "strip_size_kb": 64, 00:22:03.422 "state": "configuring", 00:22:03.422 "raid_level": "concat", 00:22:03.422 "superblock": true, 00:22:03.422 "num_base_bdevs": 4, 00:22:03.422 "num_base_bdevs_discovered": 2, 00:22:03.422 "num_base_bdevs_operational": 4, 00:22:03.422 "base_bdevs_list": [ 00:22:03.422 { 00:22:03.422 "name": "BaseBdev1", 00:22:03.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.422 "is_configured": false, 00:22:03.422 "data_offset": 0, 00:22:03.422 "data_size": 0 00:22:03.422 }, 00:22:03.422 { 00:22:03.422 "name": null, 00:22:03.422 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:03.422 "is_configured": false, 00:22:03.422 "data_offset": 2048, 00:22:03.422 "data_size": 63488 00:22:03.422 }, 00:22:03.422 { 00:22:03.422 "name": "BaseBdev3", 00:22:03.422 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:03.422 "is_configured": true, 00:22:03.422 "data_offset": 2048, 00:22:03.422 "data_size": 63488 00:22:03.422 }, 00:22:03.422 { 00:22:03.422 "name": "BaseBdev4", 00:22:03.422 "uuid": 
"a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:03.422 "is_configured": true, 00:22:03.422 "data_offset": 2048, 00:22:03.422 "data_size": 63488 00:22:03.422 } 00:22:03.422 ] 00:22:03.422 }' 00:22:03.422 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.422 10:32:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.021 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.021 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:04.021 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:04.021 10:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:04.279 [2024-07-26 10:32:17.097856] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:04.279 BaseBdev1 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:04.279 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.537 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:04.796 [ 00:22:04.796 { 00:22:04.796 "name": "BaseBdev1", 00:22:04.796 "aliases": [ 00:22:04.796 "06fe3ab6-26aa-4728-800b-1245c1af1ce9" 00:22:04.796 ], 00:22:04.796 "product_name": "Malloc disk", 00:22:04.796 "block_size": 512, 00:22:04.796 "num_blocks": 65536, 00:22:04.796 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:04.796 "assigned_rate_limits": { 00:22:04.796 "rw_ios_per_sec": 0, 00:22:04.796 "rw_mbytes_per_sec": 0, 00:22:04.796 "r_mbytes_per_sec": 0, 00:22:04.796 "w_mbytes_per_sec": 0 00:22:04.796 }, 00:22:04.796 "claimed": true, 00:22:04.796 "claim_type": "exclusive_write", 00:22:04.796 "zoned": false, 00:22:04.796 "supported_io_types": { 00:22:04.796 "read": true, 00:22:04.796 "write": true, 00:22:04.796 "unmap": true, 00:22:04.796 "flush": true, 00:22:04.796 "reset": true, 00:22:04.796 "nvme_admin": false, 00:22:04.796 "nvme_io": false, 00:22:04.796 "nvme_io_md": false, 00:22:04.796 "write_zeroes": true, 00:22:04.796 "zcopy": true, 00:22:04.796 "get_zone_info": false, 00:22:04.796 "zone_management": false, 00:22:04.796 "zone_append": false, 00:22:04.796 "compare": false, 00:22:04.796 "compare_and_write": false, 00:22:04.796 "abort": true, 00:22:04.796 "seek_hole": false, 
00:22:04.796 "seek_data": false, 00:22:04.796 "copy": true, 00:22:04.796 "nvme_iov_md": false 00:22:04.796 }, 00:22:04.796 "memory_domains": [ 00:22:04.796 { 00:22:04.796 "dma_device_id": "system", 00:22:04.796 "dma_device_type": 1 00:22:04.796 }, 00:22:04.796 { 00:22:04.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.796 "dma_device_type": 2 00:22:04.796 } 00:22:04.796 ], 00:22:04.796 "driver_specific": {} 00:22:04.796 } 00:22:04.796 ] 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.796 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.055 10:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.055 "name": "Existed_Raid", 00:22:05.055 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:05.055 "strip_size_kb": 64, 00:22:05.055 "state": "configuring", 00:22:05.055 "raid_level": "concat", 00:22:05.055 "superblock": true, 00:22:05.055 "num_base_bdevs": 4, 00:22:05.055 "num_base_bdevs_discovered": 3, 00:22:05.055 "num_base_bdevs_operational": 4, 00:22:05.055 "base_bdevs_list": [ 00:22:05.055 { 00:22:05.055 "name": "BaseBdev1", 00:22:05.055 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:05.055 "is_configured": true, 00:22:05.055 "data_offset": 2048, 00:22:05.055 "data_size": 63488 00:22:05.055 }, 00:22:05.055 { 00:22:05.055 "name": null, 00:22:05.055 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:05.055 "is_configured": false, 00:22:05.055 "data_offset": 2048, 00:22:05.055 "data_size": 63488 00:22:05.055 }, 00:22:05.055 { 00:22:05.055 "name": "BaseBdev3", 00:22:05.055 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:05.055 "is_configured": true, 00:22:05.055 "data_offset": 2048, 00:22:05.055 "data_size": 63488 00:22:05.055 }, 00:22:05.055 { 00:22:05.055 "name": "BaseBdev4", 00:22:05.055 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:05.055 "is_configured": true, 00:22:05.055 "data_offset": 2048, 00:22:05.055 "data_size": 63488 00:22:05.055 } 00:22:05.055 ] 00:22:05.055 }' 00:22:05.055 10:32:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.055 10:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.621 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.621 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:05.880 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:05.880 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:06.137 [2024-07-26 10:32:18.786343] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.137 10:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.137 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.137 "name": "Existed_Raid", 00:22:06.137 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:06.137 "strip_size_kb": 64, 00:22:06.137 "state": "configuring", 00:22:06.137 "raid_level": "concat", 00:22:06.137 "superblock": true, 00:22:06.137 "num_base_bdevs": 4, 00:22:06.137 "num_base_bdevs_discovered": 2, 00:22:06.137 "num_base_bdevs_operational": 4, 00:22:06.137 "base_bdevs_list": [ 00:22:06.137 { 00:22:06.137 "name": "BaseBdev1", 00:22:06.137 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:06.137 "is_configured": true, 00:22:06.137 "data_offset": 2048, 00:22:06.137 "data_size": 63488 00:22:06.137 }, 00:22:06.137 { 00:22:06.137 "name": null, 00:22:06.137 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:06.137 "is_configured": false, 00:22:06.137 "data_offset": 2048, 00:22:06.137 "data_size": 63488 00:22:06.137 }, 00:22:06.137 { 00:22:06.137 "name": null, 00:22:06.137 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 
00:22:06.137 "is_configured": false, 00:22:06.137 "data_offset": 2048, 00:22:06.137 "data_size": 63488 00:22:06.137 }, 00:22:06.137 { 00:22:06.137 "name": "BaseBdev4", 00:22:06.137 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:06.137 "is_configured": true, 00:22:06.137 "data_offset": 2048, 00:22:06.137 "data_size": 63488 00:22:06.137 } 00:22:06.137 ] 00:22:06.137 }' 00:22:06.137 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.137 10:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.070 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.070 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:07.070 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:07.070 10:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:07.327 [2024-07-26 10:32:20.045674] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.327 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.586 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.586 "name": "Existed_Raid", 00:22:07.586 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:07.586 "strip_size_kb": 64, 00:22:07.586 "state": "configuring", 00:22:07.586 "raid_level": "concat", 00:22:07.586 "superblock": true, 00:22:07.586 "num_base_bdevs": 4, 00:22:07.586 "num_base_bdevs_discovered": 3, 00:22:07.586 "num_base_bdevs_operational": 4, 00:22:07.586 "base_bdevs_list": [ 00:22:07.586 { 00:22:07.586 "name": "BaseBdev1", 00:22:07.586 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:07.586 
"is_configured": true, 00:22:07.586 "data_offset": 2048, 00:22:07.586 "data_size": 63488 00:22:07.586 }, 00:22:07.586 { 00:22:07.586 "name": null, 00:22:07.586 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:07.586 "is_configured": false, 00:22:07.586 "data_offset": 2048, 00:22:07.586 "data_size": 63488 00:22:07.586 }, 00:22:07.586 { 00:22:07.586 "name": "BaseBdev3", 00:22:07.586 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:07.586 "is_configured": true, 00:22:07.586 "data_offset": 2048, 00:22:07.586 "data_size": 63488 00:22:07.586 }, 00:22:07.586 { 00:22:07.586 "name": "BaseBdev4", 00:22:07.586 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:07.586 "is_configured": true, 00:22:07.586 "data_offset": 2048, 00:22:07.586 "data_size": 63488 00:22:07.586 } 00:22:07.586 ] 00:22:07.586 }' 00:22:07.586 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.586 10:32:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.152 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.152 10:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:08.410 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:08.410 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:08.410 [2024-07-26 10:32:21.313029] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.669 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.927 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.927 "name": "Existed_Raid", 00:22:08.927 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:08.927 "strip_size_kb": 
64, 00:22:08.927 "state": "configuring", 00:22:08.927 "raid_level": "concat", 00:22:08.927 "superblock": true, 00:22:08.927 "num_base_bdevs": 4, 00:22:08.927 "num_base_bdevs_discovered": 2, 00:22:08.927 "num_base_bdevs_operational": 4, 00:22:08.927 "base_bdevs_list": [ 00:22:08.927 { 00:22:08.927 "name": null, 00:22:08.927 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:08.927 "is_configured": false, 00:22:08.927 "data_offset": 2048, 00:22:08.927 "data_size": 63488 00:22:08.927 }, 00:22:08.927 { 00:22:08.927 "name": null, 00:22:08.927 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:08.927 "is_configured": false, 00:22:08.927 "data_offset": 2048, 00:22:08.927 "data_size": 63488 00:22:08.927 }, 00:22:08.927 { 00:22:08.927 "name": "BaseBdev3", 00:22:08.927 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:08.927 "is_configured": true, 00:22:08.927 "data_offset": 2048, 00:22:08.927 "data_size": 63488 00:22:08.927 }, 00:22:08.927 { 00:22:08.927 "name": "BaseBdev4", 00:22:08.927 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:08.927 "is_configured": true, 00:22:08.927 "data_offset": 2048, 00:22:08.927 "data_size": 63488 00:22:08.927 } 00:22:08.927 ] 00:22:08.927 }' 00:22:08.927 10:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.927 10:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.494 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.494 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:09.494 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:09.494 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:09.752 [2024-07-26 10:32:22.590595] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.752 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.011 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.011 "name": "Existed_Raid", 00:22:10.011 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:10.011 "strip_size_kb": 64, 00:22:10.011 "state": "configuring", 00:22:10.011 "raid_level": "concat", 00:22:10.011 "superblock": true, 00:22:10.011 "num_base_bdevs": 4, 00:22:10.011 "num_base_bdevs_discovered": 3, 00:22:10.011 "num_base_bdevs_operational": 4, 00:22:10.011 "base_bdevs_list": [ 00:22:10.011 { 00:22:10.011 "name": null, 00:22:10.011 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:10.011 "is_configured": false, 00:22:10.011 "data_offset": 2048, 00:22:10.011 "data_size": 63488 00:22:10.011 }, 00:22:10.011 { 00:22:10.011 "name": "BaseBdev2", 00:22:10.011 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:10.011 "is_configured": true, 00:22:10.011 "data_offset": 2048, 00:22:10.011 "data_size": 63488 00:22:10.011 }, 00:22:10.011 { 00:22:10.011 "name": "BaseBdev3", 00:22:10.011 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:10.011 "is_configured": true, 00:22:10.011 "data_offset": 2048, 00:22:10.011 "data_size": 63488 00:22:10.011 }, 00:22:10.011 { 00:22:10.011 "name": "BaseBdev4", 00:22:10.011 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:10.011 "is_configured": true, 00:22:10.011 "data_offset": 2048, 00:22:10.011 "data_size": 63488 00:22:10.011 } 00:22:10.011 ] 00:22:10.011 }' 00:22:10.011 10:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.011 10:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:10.580 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.841 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:10.841 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.841 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:11.099 10:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 06fe3ab6-26aa-4728-800b-1245c1af1ce9 00:22:11.357 [2024-07-26 10:32:24.041459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:11.357 [2024-07-26 10:32:24.041596] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f52cb0 00:22:11.357 [2024-07-26 10:32:24.041608] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:11.357 [2024-07-26 10:32:24.041766] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f481d0 00:22:11.357 [2024-07-26 10:32:24.041868] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x1f52cb0 00:22:11.357 [2024-07-26 10:32:24.041883] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f52cb0 00:22:11.357 [2024-07-26 10:32:24.041965] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.357 NewBaseBdev 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:11.358 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:11.616 [ 00:22:11.616 { 00:22:11.616 "name": "NewBaseBdev", 00:22:11.616 "aliases": [ 00:22:11.616 "06fe3ab6-26aa-4728-800b-1245c1af1ce9" 00:22:11.616 ], 00:22:11.616 "product_name": "Malloc disk", 00:22:11.616 "block_size": 512, 00:22:11.616 "num_blocks": 65536, 00:22:11.616 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:11.616 "assigned_rate_limits": { 00:22:11.616 "rw_ios_per_sec": 0, 00:22:11.616 "rw_mbytes_per_sec": 0, 00:22:11.616 "r_mbytes_per_sec": 0, 00:22:11.616 "w_mbytes_per_sec": 0 00:22:11.616 }, 00:22:11.616 "claimed": true, 00:22:11.616 "claim_type": "exclusive_write", 00:22:11.616 "zoned": false, 00:22:11.616 "supported_io_types": { 00:22:11.616 "read": true, 00:22:11.616 "write": true, 00:22:11.616 "unmap": true, 00:22:11.616 "flush": true, 00:22:11.616 "reset": true, 00:22:11.616 "nvme_admin": false, 00:22:11.616 "nvme_io": false, 00:22:11.616 "nvme_io_md": false, 00:22:11.616 "write_zeroes": true, 00:22:11.616 "zcopy": true, 00:22:11.616 "get_zone_info": false, 00:22:11.616 "zone_management": false, 00:22:11.616 "zone_append": false, 00:22:11.616 "compare": false, 00:22:11.616 "compare_and_write": false, 00:22:11.616 "abort": true, 00:22:11.616 "seek_hole": false, 00:22:11.616 "seek_data": false, 00:22:11.616 "copy": true, 00:22:11.616 "nvme_iov_md": false 00:22:11.616 }, 00:22:11.616 "memory_domains": [ 00:22:11.616 { 00:22:11.616 "dma_device_id": "system", 00:22:11.616 "dma_device_type": 1 00:22:11.616 }, 00:22:11.616 { 00:22:11.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.616 "dma_device_type": 2 00:22:11.616 } 00:22:11.616 ], 00:22:11.616 "driver_specific": {} 00:22:11.616 } 00:22:11.616 ] 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.616 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.874 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.874 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.874 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.874 "name": "Existed_Raid", 00:22:11.874 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:11.874 "strip_size_kb": 64, 00:22:11.874 "state": "online", 00:22:11.874 "raid_level": "concat", 00:22:11.874 "superblock": true, 00:22:11.874 "num_base_bdevs": 4, 00:22:11.874 "num_base_bdevs_discovered": 4, 00:22:11.874 "num_base_bdevs_operational": 4, 00:22:11.874 "base_bdevs_list": [ 00:22:11.874 { 00:22:11.874 "name": "NewBaseBdev", 00:22:11.874 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:11.874 "is_configured": true, 00:22:11.874 "data_offset": 2048, 00:22:11.874 "data_size": 63488 00:22:11.874 }, 00:22:11.874 { 00:22:11.874 "name": "BaseBdev2", 00:22:11.874 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:11.874 "is_configured": true, 00:22:11.874 "data_offset": 2048, 00:22:11.874 "data_size": 63488 00:22:11.874 }, 00:22:11.874 { 00:22:11.874 "name": "BaseBdev3", 00:22:11.874 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:11.874 "is_configured": true, 00:22:11.874 "data_offset": 2048, 00:22:11.874 "data_size": 63488 00:22:11.874 }, 00:22:11.874 { 00:22:11.874 "name": "BaseBdev4", 00:22:11.874 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:11.874 "is_configured": true, 00:22:11.874 "data_offset": 2048, 00:22:11.874 "data_size": 63488 00:22:11.874 } 00:22:11.874 ] 00:22:11.874 }' 00:22:11.875 10:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.875 10:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:12.441 10:32:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:12.441 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:12.699 [2024-07-26 10:32:25.525658] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:12.699 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:12.699 "name": "Existed_Raid", 00:22:12.699 "aliases": [ 00:22:12.699 "383e524c-c0fd-4467-88cd-e9886486e7fd" 00:22:12.699 ], 00:22:12.699 "product_name": "Raid Volume", 00:22:12.699 "block_size": 512, 00:22:12.699 "num_blocks": 253952, 00:22:12.699 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:12.699 "assigned_rate_limits": { 00:22:12.699 "rw_ios_per_sec": 0, 00:22:12.699 "rw_mbytes_per_sec": 0, 00:22:12.699 "r_mbytes_per_sec": 0, 00:22:12.699 "w_mbytes_per_sec": 0 00:22:12.699 }, 00:22:12.699 "claimed": false, 00:22:12.699 "zoned": false, 00:22:12.699 "supported_io_types": { 00:22:12.699 "read": true, 00:22:12.699 "write": true, 00:22:12.699 "unmap": true, 00:22:12.699 "flush": true, 00:22:12.699 "reset": true, 00:22:12.699 "nvme_admin": false, 00:22:12.699 "nvme_io": false, 00:22:12.699 "nvme_io_md": false, 00:22:12.699 "write_zeroes": true, 00:22:12.699 "zcopy": false, 00:22:12.699 "get_zone_info": false, 00:22:12.699 "zone_management": false, 00:22:12.699 "zone_append": false, 00:22:12.699 "compare": false, 00:22:12.699 "compare_and_write": false, 00:22:12.699 "abort": false, 00:22:12.699 "seek_hole": false, 00:22:12.699 "seek_data": false, 00:22:12.699 "copy": false, 00:22:12.699 "nvme_iov_md": false 00:22:12.699 }, 00:22:12.699 "memory_domains": [ 00:22:12.699 { 00:22:12.699 "dma_device_id": "system", 00:22:12.699 "dma_device_type": 1 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.699 "dma_device_type": 2 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "system", 00:22:12.699 "dma_device_type": 1 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.699 "dma_device_type": 2 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "system", 00:22:12.699 "dma_device_type": 1 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.699 "dma_device_type": 2 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "system", 00:22:12.699 "dma_device_type": 1 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.699 "dma_device_type": 2 00:22:12.699 } 00:22:12.699 ], 00:22:12.699 "driver_specific": { 00:22:12.699 "raid": { 00:22:12.699 "uuid": "383e524c-c0fd-4467-88cd-e9886486e7fd", 00:22:12.699 "strip_size_kb": 64, 00:22:12.699 "state": "online", 00:22:12.699 "raid_level": "concat", 00:22:12.699 "superblock": true, 00:22:12.699 "num_base_bdevs": 4, 00:22:12.699 "num_base_bdevs_discovered": 4, 00:22:12.699 "num_base_bdevs_operational": 4, 00:22:12.699 "base_bdevs_list": [ 00:22:12.699 { 00:22:12.699 "name": "NewBaseBdev", 00:22:12.699 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:12.699 "is_configured": true, 00:22:12.699 "data_offset": 2048, 00:22:12.699 "data_size": 63488 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "name": "BaseBdev2", 00:22:12.699 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:12.699 "is_configured": true, 00:22:12.699 "data_offset": 2048, 
00:22:12.699 "data_size": 63488 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "name": "BaseBdev3", 00:22:12.699 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:12.699 "is_configured": true, 00:22:12.699 "data_offset": 2048, 00:22:12.699 "data_size": 63488 00:22:12.699 }, 00:22:12.699 { 00:22:12.699 "name": "BaseBdev4", 00:22:12.699 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:12.699 "is_configured": true, 00:22:12.699 "data_offset": 2048, 00:22:12.699 "data_size": 63488 00:22:12.699 } 00:22:12.699 ] 00:22:12.699 } 00:22:12.699 } 00:22:12.699 }' 00:22:12.700 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:12.700 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:12.700 BaseBdev2 00:22:12.700 BaseBdev3 00:22:12.700 BaseBdev4' 00:22:12.700 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:12.700 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:12.700 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:12.958 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:12.958 "name": "NewBaseBdev", 00:22:12.958 "aliases": [ 00:22:12.958 "06fe3ab6-26aa-4728-800b-1245c1af1ce9" 00:22:12.958 ], 00:22:12.958 "product_name": "Malloc disk", 00:22:12.958 "block_size": 512, 00:22:12.958 "num_blocks": 65536, 00:22:12.958 "uuid": "06fe3ab6-26aa-4728-800b-1245c1af1ce9", 00:22:12.958 "assigned_rate_limits": { 00:22:12.958 "rw_ios_per_sec": 0, 00:22:12.958 "rw_mbytes_per_sec": 0, 00:22:12.958 "r_mbytes_per_sec": 0, 00:22:12.958 "w_mbytes_per_sec": 0 00:22:12.958 }, 00:22:12.958 "claimed": true, 00:22:12.958 "claim_type": "exclusive_write", 00:22:12.958 "zoned": false, 00:22:12.958 "supported_io_types": { 00:22:12.958 "read": true, 00:22:12.958 "write": true, 00:22:12.958 "unmap": true, 00:22:12.958 "flush": true, 00:22:12.958 "reset": true, 00:22:12.958 "nvme_admin": false, 00:22:12.958 "nvme_io": false, 00:22:12.958 "nvme_io_md": false, 00:22:12.958 "write_zeroes": true, 00:22:12.958 "zcopy": true, 00:22:12.958 "get_zone_info": false, 00:22:12.958 "zone_management": false, 00:22:12.958 "zone_append": false, 00:22:12.958 "compare": false, 00:22:12.958 "compare_and_write": false, 00:22:12.958 "abort": true, 00:22:12.958 "seek_hole": false, 00:22:12.958 "seek_data": false, 00:22:12.958 "copy": true, 00:22:12.958 "nvme_iov_md": false 00:22:12.958 }, 00:22:12.958 "memory_domains": [ 00:22:12.958 { 00:22:12.958 "dma_device_id": "system", 00:22:12.958 "dma_device_type": 1 00:22:12.958 }, 00:22:12.958 { 00:22:12.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.958 "dma_device_type": 2 00:22:12.958 } 00:22:12.958 ], 00:22:12.958 "driver_specific": {} 00:22:12.958 }' 00:22:12.958 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:12.958 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.216 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.216 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.216 10:32:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.216 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:13.216 10:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.216 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.216 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:13.217 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.217 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.475 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:13.475 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.475 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:13.475 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:13.475 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:13.475 "name": "BaseBdev2", 00:22:13.475 "aliases": [ 00:22:13.475 "6ca1e322-1124-4bd9-9c14-7a87fd779bb5" 00:22:13.475 ], 00:22:13.475 "product_name": "Malloc disk", 00:22:13.475 "block_size": 512, 00:22:13.475 "num_blocks": 65536, 00:22:13.475 "uuid": "6ca1e322-1124-4bd9-9c14-7a87fd779bb5", 00:22:13.475 "assigned_rate_limits": { 00:22:13.475 "rw_ios_per_sec": 0, 00:22:13.475 "rw_mbytes_per_sec": 0, 00:22:13.475 "r_mbytes_per_sec": 0, 00:22:13.475 "w_mbytes_per_sec": 0 00:22:13.475 }, 00:22:13.475 "claimed": true, 00:22:13.475 "claim_type": "exclusive_write", 00:22:13.475 "zoned": false, 00:22:13.475 "supported_io_types": { 00:22:13.475 "read": true, 00:22:13.475 "write": true, 00:22:13.475 "unmap": true, 00:22:13.475 "flush": true, 00:22:13.475 "reset": true, 00:22:13.475 "nvme_admin": false, 00:22:13.475 "nvme_io": false, 00:22:13.475 "nvme_io_md": false, 00:22:13.475 "write_zeroes": true, 00:22:13.475 "zcopy": true, 00:22:13.475 "get_zone_info": false, 00:22:13.475 "zone_management": false, 00:22:13.475 "zone_append": false, 00:22:13.475 "compare": false, 00:22:13.475 "compare_and_write": false, 00:22:13.475 "abort": true, 00:22:13.475 "seek_hole": false, 00:22:13.475 "seek_data": false, 00:22:13.475 "copy": true, 00:22:13.475 "nvme_iov_md": false 00:22:13.475 }, 00:22:13.475 "memory_domains": [ 00:22:13.475 { 00:22:13.475 "dma_device_id": "system", 00:22:13.475 "dma_device_type": 1 00:22:13.475 }, 00:22:13.475 { 00:22:13.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.475 "dma_device_type": 2 00:22:13.475 } 00:22:13.475 ], 00:22:13.475 "driver_specific": {} 00:22:13.475 }' 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:13.733 10:32:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.733 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:13.992 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.250 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.250 "name": "BaseBdev3", 00:22:14.250 "aliases": [ 00:22:14.250 "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448" 00:22:14.250 ], 00:22:14.250 "product_name": "Malloc disk", 00:22:14.250 "block_size": 512, 00:22:14.250 "num_blocks": 65536, 00:22:14.250 "uuid": "1ce4b5fc-ba37-4bfb-af0e-ddc1f58d4448", 00:22:14.250 "assigned_rate_limits": { 00:22:14.250 "rw_ios_per_sec": 0, 00:22:14.250 "rw_mbytes_per_sec": 0, 00:22:14.250 "r_mbytes_per_sec": 0, 00:22:14.250 "w_mbytes_per_sec": 0 00:22:14.250 }, 00:22:14.250 "claimed": true, 00:22:14.250 "claim_type": "exclusive_write", 00:22:14.250 "zoned": false, 00:22:14.250 "supported_io_types": { 00:22:14.250 "read": true, 00:22:14.250 "write": true, 00:22:14.250 "unmap": true, 00:22:14.250 "flush": true, 00:22:14.250 "reset": true, 00:22:14.250 "nvme_admin": false, 00:22:14.250 "nvme_io": false, 00:22:14.250 "nvme_io_md": false, 00:22:14.250 "write_zeroes": true, 00:22:14.250 "zcopy": true, 00:22:14.250 "get_zone_info": false, 00:22:14.250 "zone_management": false, 00:22:14.250 "zone_append": false, 00:22:14.250 "compare": false, 00:22:14.250 "compare_and_write": false, 00:22:14.250 "abort": true, 00:22:14.250 "seek_hole": false, 00:22:14.250 "seek_data": false, 00:22:14.250 "copy": true, 00:22:14.250 "nvme_iov_md": false 00:22:14.250 }, 00:22:14.250 "memory_domains": [ 00:22:14.250 { 00:22:14.250 "dma_device_id": "system", 00:22:14.250 "dma_device_type": 1 00:22:14.250 }, 00:22:14.250 { 00:22:14.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.250 "dma_device_type": 2 00:22:14.250 } 00:22:14.250 ], 00:22:14.250 "driver_specific": {} 00:22:14.250 }' 00:22:14.250 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.250 10:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.250 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.250 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.250 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.250 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.250 10:32:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.250 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:14.509 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.767 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.767 "name": "BaseBdev4", 00:22:14.767 "aliases": [ 00:22:14.767 "a2a8ba63-e67d-4edb-83d1-97f7d53015d1" 00:22:14.767 ], 00:22:14.767 "product_name": "Malloc disk", 00:22:14.767 "block_size": 512, 00:22:14.767 "num_blocks": 65536, 00:22:14.767 "uuid": "a2a8ba63-e67d-4edb-83d1-97f7d53015d1", 00:22:14.767 "assigned_rate_limits": { 00:22:14.767 "rw_ios_per_sec": 0, 00:22:14.767 "rw_mbytes_per_sec": 0, 00:22:14.767 "r_mbytes_per_sec": 0, 00:22:14.767 "w_mbytes_per_sec": 0 00:22:14.767 }, 00:22:14.767 "claimed": true, 00:22:14.767 "claim_type": "exclusive_write", 00:22:14.767 "zoned": false, 00:22:14.767 "supported_io_types": { 00:22:14.767 "read": true, 00:22:14.767 "write": true, 00:22:14.767 "unmap": true, 00:22:14.767 "flush": true, 00:22:14.767 "reset": true, 00:22:14.767 "nvme_admin": false, 00:22:14.767 "nvme_io": false, 00:22:14.767 "nvme_io_md": false, 00:22:14.767 "write_zeroes": true, 00:22:14.767 "zcopy": true, 00:22:14.767 "get_zone_info": false, 00:22:14.767 "zone_management": false, 00:22:14.767 "zone_append": false, 00:22:14.767 "compare": false, 00:22:14.767 "compare_and_write": false, 00:22:14.767 "abort": true, 00:22:14.767 "seek_hole": false, 00:22:14.767 "seek_data": false, 00:22:14.767 "copy": true, 00:22:14.767 "nvme_iov_md": false 00:22:14.767 }, 00:22:14.767 "memory_domains": [ 00:22:14.767 { 00:22:14.767 "dma_device_id": "system", 00:22:14.767 "dma_device_type": 1 00:22:14.767 }, 00:22:14.767 { 00:22:14.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.767 "dma_device_type": 2 00:22:14.767 } 00:22:14.767 ], 00:22:14.767 "driver_specific": {} 00:22:14.767 }' 00:22:14.767 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.767 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.767 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.767 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.767 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.025 10:32:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.025 10:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.284 [2024-07-26 10:32:28.031989] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.284 [2024-07-26 10:32:28.032012] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.284 [2024-07-26 10:32:28.032061] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.284 [2024-07-26 10:32:28.032116] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.284 [2024-07-26 10:32:28.032126] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f52cb0 name Existed_Raid, state offline 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3441713 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3441713 ']' 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3441713 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3441713 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3441713' 00:22:15.284 killing process with pid 3441713 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3441713 00:22:15.284 [2024-07-26 10:32:28.104353] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.284 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3441713 00:22:15.284 [2024-07-26 10:32:28.135284] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.542 10:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:15.542 00:22:15.542 real 0m31.020s 00:22:15.542 user 0m56.918s 00:22:15.542 sys 0m5.629s 00:22:15.542 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.542 10:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.542 ************************************ 00:22:15.542 END TEST raid_state_function_test_sb 00:22:15.543 ************************************ 00:22:15.543 10:32:28 bdev_raid -- 
bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:22:15.543 10:32:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:15.543 10:32:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:15.543 10:32:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.543 ************************************ 00:22:15.543 START TEST raid_superblock_test 00:22:15.543 ************************************ 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3447591 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3447591 /var/tmp/spdk-raid.sock 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3447591 ']' 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:15.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.543 10:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:15.543 [2024-07-26 10:32:28.442564] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:15.543 [2024-07-26 10:32:28.442621] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447591 ] 00:22:15.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.814 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:15.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.814 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:15.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.814 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:15.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.814 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:15.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:15.815 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:15.815 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:15.815 [2024-07-26 10:32:28.576231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.815 [2024-07-26 10:32:28.620806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.815 [2024-07-26 10:32:28.681686] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:15.815 [2024-07-26 10:32:28.681721] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.392 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:16.649 malloc1 
00:22:16.650 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:16.908 [2024-07-26 10:32:29.621441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:16.908 [2024-07-26 10:32:29.621486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.908 [2024-07-26 10:32:29.621505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd8270 00:22:16.908 [2024-07-26 10:32:29.621517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.908 [2024-07-26 10:32:29.623012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.908 [2024-07-26 10:32:29.623039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:16.908 pt1 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:16.908 malloc2 00:22:16.908 10:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.474 [2024-07-26 10:32:30.283535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.474 [2024-07-26 10:32:30.283585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.474 [2024-07-26 10:32:30.283602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f942f0 00:22:17.474 [2024-07-26 10:32:30.283613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.474 [2024-07-26 10:32:30.285125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.474 [2024-07-26 10:32:30.285160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.474 pt2 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:22:17.474 10:32:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.474 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:17.732 malloc3 00:22:17.732 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:17.990 [2024-07-26 10:32:30.676833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:17.990 [2024-07-26 10:32:30.676876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.990 [2024-07-26 10:32:30.676892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5e650 00:22:17.990 [2024-07-26 10:32:30.676904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.990 [2024-07-26 10:32:30.678415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.990 [2024-07-26 10:32:30.678441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.990 pt3 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.990 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:18.247 malloc4 00:22:18.247 10:32:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:18.247 [2024-07-26 10:32:31.066178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:18.247 [2024-07-26 10:32:31.066218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.247 [2024-07-26 10:32:31.066234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5fce0 00:22:18.247 [2024-07-26 10:32:31.066246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.247 
[2024-07-26 10:32:31.067557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.247 [2024-07-26 10:32:31.067582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:18.247 pt4 00:22:18.247 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:18.247 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:18.247 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:18.505 [2024-07-26 10:32:31.278757] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.505 [2024-07-26 10:32:31.279900] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.505 [2024-07-26 10:32:31.279948] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:18.505 [2024-07-26 10:32:31.279990] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:18.505 [2024-07-26 10:32:31.280130] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f626e0 00:22:18.505 [2024-07-26 10:32:31.280146] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:18.505 [2024-07-26 10:32:31.280326] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e3d320 00:22:18.505 [2024-07-26 10:32:31.280450] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f626e0 00:22:18.505 [2024-07-26 10:32:31.280459] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f626e0 00:22:18.505 [2024-07-26 10:32:31.280556] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.505 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.764 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.764 "name": "raid_bdev1", 00:22:18.764 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 
00:22:18.764 "strip_size_kb": 64, 00:22:18.764 "state": "online", 00:22:18.764 "raid_level": "concat", 00:22:18.764 "superblock": true, 00:22:18.764 "num_base_bdevs": 4, 00:22:18.764 "num_base_bdevs_discovered": 4, 00:22:18.764 "num_base_bdevs_operational": 4, 00:22:18.764 "base_bdevs_list": [ 00:22:18.764 { 00:22:18.764 "name": "pt1", 00:22:18.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 2048, 00:22:18.764 "data_size": 63488 00:22:18.764 }, 00:22:18.764 { 00:22:18.764 "name": "pt2", 00:22:18.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 2048, 00:22:18.764 "data_size": 63488 00:22:18.764 }, 00:22:18.764 { 00:22:18.764 "name": "pt3", 00:22:18.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 2048, 00:22:18.764 "data_size": 63488 00:22:18.764 }, 00:22:18.764 { 00:22:18.764 "name": "pt4", 00:22:18.764 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:18.764 "is_configured": true, 00:22:18.764 "data_offset": 2048, 00:22:18.764 "data_size": 63488 00:22:18.764 } 00:22:18.764 ] 00:22:18.764 }' 00:22:18.764 10:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.764 10:32:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:19.330 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:19.589 [2024-07-26 10:32:32.273648] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.589 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:19.589 "name": "raid_bdev1", 00:22:19.589 "aliases": [ 00:22:19.589 "3f49f975-3cad-40ac-91ec-d620216ec1b3" 00:22:19.589 ], 00:22:19.589 "product_name": "Raid Volume", 00:22:19.589 "block_size": 512, 00:22:19.589 "num_blocks": 253952, 00:22:19.589 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 00:22:19.589 "assigned_rate_limits": { 00:22:19.589 "rw_ios_per_sec": 0, 00:22:19.589 "rw_mbytes_per_sec": 0, 00:22:19.589 "r_mbytes_per_sec": 0, 00:22:19.589 "w_mbytes_per_sec": 0 00:22:19.589 }, 00:22:19.589 "claimed": false, 00:22:19.589 "zoned": false, 00:22:19.589 "supported_io_types": { 00:22:19.589 "read": true, 00:22:19.589 "write": true, 00:22:19.589 "unmap": true, 00:22:19.589 "flush": true, 00:22:19.589 "reset": true, 00:22:19.589 "nvme_admin": false, 00:22:19.589 "nvme_io": false, 00:22:19.589 "nvme_io_md": false, 00:22:19.589 "write_zeroes": true, 00:22:19.589 "zcopy": false, 00:22:19.589 "get_zone_info": false, 00:22:19.589 "zone_management": false, 00:22:19.589 
"zone_append": false, 00:22:19.589 "compare": false, 00:22:19.589 "compare_and_write": false, 00:22:19.589 "abort": false, 00:22:19.589 "seek_hole": false, 00:22:19.589 "seek_data": false, 00:22:19.589 "copy": false, 00:22:19.589 "nvme_iov_md": false 00:22:19.589 }, 00:22:19.589 "memory_domains": [ 00:22:19.589 { 00:22:19.589 "dma_device_id": "system", 00:22:19.589 "dma_device_type": 1 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.589 "dma_device_type": 2 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "system", 00:22:19.589 "dma_device_type": 1 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.589 "dma_device_type": 2 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "system", 00:22:19.589 "dma_device_type": 1 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.589 "dma_device_type": 2 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "system", 00:22:19.589 "dma_device_type": 1 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.589 "dma_device_type": 2 00:22:19.589 } 00:22:19.589 ], 00:22:19.589 "driver_specific": { 00:22:19.589 "raid": { 00:22:19.589 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 00:22:19.589 "strip_size_kb": 64, 00:22:19.589 "state": "online", 00:22:19.589 "raid_level": "concat", 00:22:19.589 "superblock": true, 00:22:19.589 "num_base_bdevs": 4, 00:22:19.589 "num_base_bdevs_discovered": 4, 00:22:19.589 "num_base_bdevs_operational": 4, 00:22:19.589 "base_bdevs_list": [ 00:22:19.589 { 00:22:19.589 "name": "pt1", 00:22:19.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.589 "is_configured": true, 00:22:19.589 "data_offset": 2048, 00:22:19.589 "data_size": 63488 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "name": "pt2", 00:22:19.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.589 "is_configured": true, 00:22:19.589 "data_offset": 2048, 00:22:19.589 "data_size": 63488 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "name": "pt3", 00:22:19.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.589 "is_configured": true, 00:22:19.589 "data_offset": 2048, 00:22:19.589 "data_size": 63488 00:22:19.589 }, 00:22:19.589 { 00:22:19.589 "name": "pt4", 00:22:19.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:19.589 "is_configured": true, 00:22:19.589 "data_offset": 2048, 00:22:19.589 "data_size": 63488 00:22:19.589 } 00:22:19.589 ] 00:22:19.589 } 00:22:19.589 } 00:22:19.589 }' 00:22:19.589 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.589 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:19.589 pt2 00:22:19.589 pt3 00:22:19.589 pt4' 00:22:19.589 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.589 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:19.589 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.847 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.847 "name": "pt1", 00:22:19.847 "aliases": [ 00:22:19.847 "00000000-0000-0000-0000-000000000001" 00:22:19.847 ], 00:22:19.847 "product_name": 
"passthru", 00:22:19.847 "block_size": 512, 00:22:19.847 "num_blocks": 65536, 00:22:19.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.847 "assigned_rate_limits": { 00:22:19.847 "rw_ios_per_sec": 0, 00:22:19.847 "rw_mbytes_per_sec": 0, 00:22:19.847 "r_mbytes_per_sec": 0, 00:22:19.847 "w_mbytes_per_sec": 0 00:22:19.847 }, 00:22:19.847 "claimed": true, 00:22:19.847 "claim_type": "exclusive_write", 00:22:19.847 "zoned": false, 00:22:19.847 "supported_io_types": { 00:22:19.847 "read": true, 00:22:19.847 "write": true, 00:22:19.847 "unmap": true, 00:22:19.847 "flush": true, 00:22:19.847 "reset": true, 00:22:19.847 "nvme_admin": false, 00:22:19.847 "nvme_io": false, 00:22:19.847 "nvme_io_md": false, 00:22:19.847 "write_zeroes": true, 00:22:19.847 "zcopy": true, 00:22:19.847 "get_zone_info": false, 00:22:19.847 "zone_management": false, 00:22:19.847 "zone_append": false, 00:22:19.847 "compare": false, 00:22:19.847 "compare_and_write": false, 00:22:19.847 "abort": true, 00:22:19.847 "seek_hole": false, 00:22:19.847 "seek_data": false, 00:22:19.847 "copy": true, 00:22:19.847 "nvme_iov_md": false 00:22:19.847 }, 00:22:19.847 "memory_domains": [ 00:22:19.847 { 00:22:19.847 "dma_device_id": "system", 00:22:19.847 "dma_device_type": 1 00:22:19.847 }, 00:22:19.847 { 00:22:19.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.847 "dma_device_type": 2 00:22:19.847 } 00:22:19.847 ], 00:22:19.847 "driver_specific": { 00:22:19.847 "passthru": { 00:22:19.847 "name": "pt1", 00:22:19.847 "base_bdev_name": "malloc1" 00:22:19.847 } 00:22:19.847 } 00:22:19.847 }' 00:22:19.847 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.847 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.847 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.847 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.847 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:20.105 10:32:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.363 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.363 "name": "pt2", 00:22:20.363 "aliases": [ 00:22:20.363 "00000000-0000-0000-0000-000000000002" 00:22:20.363 ], 00:22:20.363 "product_name": "passthru", 00:22:20.363 "block_size": 512, 00:22:20.363 "num_blocks": 65536, 00:22:20.363 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:20.363 "assigned_rate_limits": { 00:22:20.363 "rw_ios_per_sec": 0, 00:22:20.363 "rw_mbytes_per_sec": 0, 00:22:20.363 "r_mbytes_per_sec": 0, 00:22:20.363 "w_mbytes_per_sec": 0 00:22:20.363 }, 00:22:20.363 "claimed": true, 00:22:20.363 "claim_type": "exclusive_write", 00:22:20.363 "zoned": false, 00:22:20.363 "supported_io_types": { 00:22:20.363 "read": true, 00:22:20.363 "write": true, 00:22:20.363 "unmap": true, 00:22:20.363 "flush": true, 00:22:20.363 "reset": true, 00:22:20.363 "nvme_admin": false, 00:22:20.363 "nvme_io": false, 00:22:20.363 "nvme_io_md": false, 00:22:20.363 "write_zeroes": true, 00:22:20.363 "zcopy": true, 00:22:20.363 "get_zone_info": false, 00:22:20.363 "zone_management": false, 00:22:20.363 "zone_append": false, 00:22:20.363 "compare": false, 00:22:20.363 "compare_and_write": false, 00:22:20.363 "abort": true, 00:22:20.363 "seek_hole": false, 00:22:20.363 "seek_data": false, 00:22:20.363 "copy": true, 00:22:20.363 "nvme_iov_md": false 00:22:20.363 }, 00:22:20.363 "memory_domains": [ 00:22:20.363 { 00:22:20.363 "dma_device_id": "system", 00:22:20.363 "dma_device_type": 1 00:22:20.363 }, 00:22:20.363 { 00:22:20.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.363 "dma_device_type": 2 00:22:20.363 } 00:22:20.363 ], 00:22:20.363 "driver_specific": { 00:22:20.363 "passthru": { 00:22:20.363 "name": "pt2", 00:22:20.363 "base_bdev_name": "malloc2" 00:22:20.363 } 00:22:20.363 } 00:22:20.363 }' 00:22:20.363 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.363 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:20.621 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.879 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.879 "name": "pt3", 00:22:20.879 "aliases": [ 00:22:20.879 "00000000-0000-0000-0000-000000000003" 00:22:20.879 ], 00:22:20.879 "product_name": "passthru", 00:22:20.879 "block_size": 512, 00:22:20.879 "num_blocks": 65536, 00:22:20.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.879 "assigned_rate_limits": { 00:22:20.879 "rw_ios_per_sec": 0, 00:22:20.879 
"rw_mbytes_per_sec": 0, 00:22:20.879 "r_mbytes_per_sec": 0, 00:22:20.879 "w_mbytes_per_sec": 0 00:22:20.879 }, 00:22:20.879 "claimed": true, 00:22:20.879 "claim_type": "exclusive_write", 00:22:20.879 "zoned": false, 00:22:20.879 "supported_io_types": { 00:22:20.879 "read": true, 00:22:20.879 "write": true, 00:22:20.879 "unmap": true, 00:22:20.879 "flush": true, 00:22:20.879 "reset": true, 00:22:20.879 "nvme_admin": false, 00:22:20.879 "nvme_io": false, 00:22:20.879 "nvme_io_md": false, 00:22:20.879 "write_zeroes": true, 00:22:20.879 "zcopy": true, 00:22:20.879 "get_zone_info": false, 00:22:20.879 "zone_management": false, 00:22:20.879 "zone_append": false, 00:22:20.879 "compare": false, 00:22:20.879 "compare_and_write": false, 00:22:20.879 "abort": true, 00:22:20.879 "seek_hole": false, 00:22:20.879 "seek_data": false, 00:22:20.879 "copy": true, 00:22:20.879 "nvme_iov_md": false 00:22:20.879 }, 00:22:20.879 "memory_domains": [ 00:22:20.879 { 00:22:20.879 "dma_device_id": "system", 00:22:20.879 "dma_device_type": 1 00:22:20.879 }, 00:22:20.879 { 00:22:20.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.879 "dma_device_type": 2 00:22:20.879 } 00:22:20.879 ], 00:22:20.879 "driver_specific": { 00:22:20.879 "passthru": { 00:22:20.879 "name": "pt3", 00:22:20.879 "base_bdev_name": "malloc3" 00:22:20.879 } 00:22:20.879 } 00:22:20.879 }' 00:22:20.879 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.879 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.136 10:32:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.136 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.393 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.393 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.393 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:21.393 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.651 "name": "pt4", 00:22:21.651 "aliases": [ 00:22:21.651 "00000000-0000-0000-0000-000000000004" 00:22:21.651 ], 00:22:21.651 "product_name": "passthru", 00:22:21.651 "block_size": 512, 00:22:21.651 "num_blocks": 65536, 00:22:21.651 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:21.651 "assigned_rate_limits": { 00:22:21.651 "rw_ios_per_sec": 0, 00:22:21.651 "rw_mbytes_per_sec": 0, 00:22:21.651 "r_mbytes_per_sec": 0, 00:22:21.651 "w_mbytes_per_sec": 0 00:22:21.651 }, 00:22:21.651 "claimed": 
true, 00:22:21.651 "claim_type": "exclusive_write", 00:22:21.651 "zoned": false, 00:22:21.651 "supported_io_types": { 00:22:21.651 "read": true, 00:22:21.651 "write": true, 00:22:21.651 "unmap": true, 00:22:21.651 "flush": true, 00:22:21.651 "reset": true, 00:22:21.651 "nvme_admin": false, 00:22:21.651 "nvme_io": false, 00:22:21.651 "nvme_io_md": false, 00:22:21.651 "write_zeroes": true, 00:22:21.651 "zcopy": true, 00:22:21.651 "get_zone_info": false, 00:22:21.651 "zone_management": false, 00:22:21.651 "zone_append": false, 00:22:21.651 "compare": false, 00:22:21.651 "compare_and_write": false, 00:22:21.651 "abort": true, 00:22:21.651 "seek_hole": false, 00:22:21.651 "seek_data": false, 00:22:21.651 "copy": true, 00:22:21.651 "nvme_iov_md": false 00:22:21.651 }, 00:22:21.651 "memory_domains": [ 00:22:21.651 { 00:22:21.651 "dma_device_id": "system", 00:22:21.651 "dma_device_type": 1 00:22:21.651 }, 00:22:21.651 { 00:22:21.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.651 "dma_device_type": 2 00:22:21.651 } 00:22:21.651 ], 00:22:21.651 "driver_specific": { 00:22:21.651 "passthru": { 00:22:21.651 "name": "pt4", 00:22:21.651 "base_bdev_name": "malloc4" 00:22:21.651 } 00:22:21.651 } 00:22:21.651 }' 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.651 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.910 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.910 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.910 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:22:21.910 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.168 [2024-07-26 10:32:34.852461] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.168 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=3f49f975-3cad-40ac-91ec-d620216ec1b3 00:22:22.168 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 3f49f975-3cad-40ac-91ec-d620216ec1b3 ']' 00:22:22.168 10:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:22.425 [2024-07-26 10:32:35.080763] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.426 [2024-07-26 10:32:35.080780] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:22:22.426 [2024-07-26 10:32:35.080823] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.426 [2024-07-26 10:32:35.080883] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.426 [2024-07-26 10:32:35.080893] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f626e0 name raid_bdev1, state offline 00:22:22.426 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.426 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:22:22.684 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:22:22.684 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:22:22.684 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:22.684 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:22.684 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:22.684 10:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:23.250 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:23.250 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:23.509 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:23.509 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:24.076 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:24.076 10:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.334 10:32:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:24.334 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:24.592 [2024-07-26 10:32:37.250351] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:24.592 [2024-07-26 10:32:37.251594] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:24.592 [2024-07-26 10:32:37.251634] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:24.592 [2024-07-26 10:32:37.251666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:24.592 [2024-07-26 10:32:37.251706] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:24.592 [2024-07-26 10:32:37.251743] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:24.592 [2024-07-26 10:32:37.251765] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:24.592 [2024-07-26 10:32:37.251785] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:24.592 [2024-07-26 10:32:37.251803] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.592 [2024-07-26 10:32:37.251812] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f626e0 name raid_bdev1, state configuring 00:22:24.592 request: 00:22:24.592 { 00:22:24.592 "name": "raid_bdev1", 00:22:24.592 "raid_level": "concat", 00:22:24.592 "base_bdevs": [ 00:22:24.592 "malloc1", 00:22:24.592 "malloc2", 00:22:24.592 "malloc3", 00:22:24.592 "malloc4" 00:22:24.592 ], 00:22:24.592 "strip_size_kb": 64, 00:22:24.592 "superblock": false, 00:22:24.592 "method": "bdev_raid_create", 00:22:24.592 "req_id": 1 00:22:24.592 } 00:22:24.592 Got JSON-RPC error response 00:22:24.592 response: 00:22:24.592 { 00:22:24.592 "code": -17, 00:22:24.592 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:24.592 } 00:22:24.592 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:24.592 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.592 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.592 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:22:24.592 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.592 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:24.850 [2024-07-26 10:32:37.707494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:24.850 [2024-07-26 10:32:37.707536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.850 [2024-07-26 10:32:37.707558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd85b0 00:22:24.850 [2024-07-26 10:32:37.707569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.850 [2024-07-26 10:32:37.709026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.850 [2024-07-26 10:32:37.709052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:24.850 [2024-07-26 10:32:37.709117] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:24.850 [2024-07-26 10:32:37.709150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:24.850 pt1 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.850 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.108 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.108 "name": "raid_bdev1", 00:22:25.108 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 00:22:25.108 "strip_size_kb": 64, 00:22:25.108 "state": "configuring", 00:22:25.108 "raid_level": "concat", 00:22:25.108 "superblock": true, 00:22:25.108 "num_base_bdevs": 4, 
00:22:25.108 "num_base_bdevs_discovered": 1, 00:22:25.108 "num_base_bdevs_operational": 4, 00:22:25.108 "base_bdevs_list": [ 00:22:25.108 { 00:22:25.108 "name": "pt1", 00:22:25.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:25.108 "is_configured": true, 00:22:25.108 "data_offset": 2048, 00:22:25.108 "data_size": 63488 00:22:25.108 }, 00:22:25.108 { 00:22:25.108 "name": null, 00:22:25.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:25.108 "is_configured": false, 00:22:25.108 "data_offset": 2048, 00:22:25.108 "data_size": 63488 00:22:25.108 }, 00:22:25.108 { 00:22:25.108 "name": null, 00:22:25.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:25.108 "is_configured": false, 00:22:25.108 "data_offset": 2048, 00:22:25.108 "data_size": 63488 00:22:25.108 }, 00:22:25.108 { 00:22:25.108 "name": null, 00:22:25.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:25.108 "is_configured": false, 00:22:25.108 "data_offset": 2048, 00:22:25.108 "data_size": 63488 00:22:25.108 } 00:22:25.108 ] 00:22:25.108 }' 00:22:25.108 10:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.108 10:32:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.674 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:22:25.674 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.932 [2024-07-26 10:32:38.738235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.932 [2024-07-26 10:32:38.738291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.932 [2024-07-26 10:32:38.738320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f93a60 00:22:25.932 [2024-07-26 10:32:38.738345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.932 [2024-07-26 10:32:38.738662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.932 [2024-07-26 10:32:38.738685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.932 [2024-07-26 10:32:38.738743] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:25.932 [2024-07-26 10:32:38.738760] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.932 pt2 00:22:25.932 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:26.191 [2024-07-26 10:32:38.950794] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.191 10:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.449 10:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.449 "name": "raid_bdev1", 00:22:26.449 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 00:22:26.449 "strip_size_kb": 64, 00:22:26.449 "state": "configuring", 00:22:26.449 "raid_level": "concat", 00:22:26.449 "superblock": true, 00:22:26.449 "num_base_bdevs": 4, 00:22:26.449 "num_base_bdevs_discovered": 1, 00:22:26.449 "num_base_bdevs_operational": 4, 00:22:26.449 "base_bdevs_list": [ 00:22:26.449 { 00:22:26.449 "name": "pt1", 00:22:26.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.449 "is_configured": true, 00:22:26.449 "data_offset": 2048, 00:22:26.449 "data_size": 63488 00:22:26.449 }, 00:22:26.449 { 00:22:26.449 "name": null, 00:22:26.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.449 "is_configured": false, 00:22:26.449 "data_offset": 2048, 00:22:26.449 "data_size": 63488 00:22:26.449 }, 00:22:26.449 { 00:22:26.449 "name": null, 00:22:26.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.449 "is_configured": false, 00:22:26.449 "data_offset": 2048, 00:22:26.449 "data_size": 63488 00:22:26.449 }, 00:22:26.449 { 00:22:26.449 "name": null, 00:22:26.449 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:26.449 "is_configured": false, 00:22:26.449 "data_offset": 2048, 00:22:26.449 "data_size": 63488 00:22:26.449 } 00:22:26.449 ] 00:22:26.449 }' 00:22:26.449 10:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.449 10:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.015 10:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:22:27.015 10:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:27.015 10:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:27.273 [2024-07-26 10:32:39.981510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:27.273 [2024-07-26 10:32:39.981556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.273 [2024-07-26 10:32:39.981573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f626e0 00:22:27.273 [2024-07-26 10:32:39.981584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.273 [2024-07-26 10:32:39.981893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.273 [2024-07-26 10:32:39.981910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:22:27.273 [2024-07-26 10:32:39.981969] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:27.273 [2024-07-26 10:32:39.981986] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:27.273 pt2 00:22:27.273 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:27.273 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:27.273 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:27.531 [2024-07-26 10:32:40.214125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:27.531 [2024-07-26 10:32:40.214180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.531 [2024-07-26 10:32:40.214199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f63410 00:22:27.531 [2024-07-26 10:32:40.214211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.531 [2024-07-26 10:32:40.214511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.531 [2024-07-26 10:32:40.214529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:27.531 [2024-07-26 10:32:40.214582] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:27.531 [2024-07-26 10:32:40.214599] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:27.531 pt3 00:22:27.531 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:27.531 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:27.531 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:27.790 [2024-07-26 10:32:40.442722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:27.790 [2024-07-26 10:32:40.442758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.790 [2024-07-26 10:32:40.442773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f61230 00:22:27.790 [2024-07-26 10:32:40.442784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.790 [2024-07-26 10:32:40.443053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.790 [2024-07-26 10:32:40.443070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:27.790 [2024-07-26 10:32:40.443117] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:27.790 [2024-07-26 10:32:40.443133] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:27.790 [2024-07-26 10:32:40.443251] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f5f650 00:22:27.790 [2024-07-26 10:32:40.443261] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:27.790 [2024-07-26 10:32:40.443411] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f63d10 00:22:27.790 [2024-07-26 10:32:40.443524] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f5f650 00:22:27.790 [2024-07-26 10:32:40.443533] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f5f650 00:22:27.790 [2024-07-26 10:32:40.443616] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.790 pt4 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.790 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.048 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.048 "name": "raid_bdev1", 00:22:28.048 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 00:22:28.048 "strip_size_kb": 64, 00:22:28.048 "state": "online", 00:22:28.048 "raid_level": "concat", 00:22:28.048 "superblock": true, 00:22:28.048 "num_base_bdevs": 4, 00:22:28.048 "num_base_bdevs_discovered": 4, 00:22:28.048 "num_base_bdevs_operational": 4, 00:22:28.048 "base_bdevs_list": [ 00:22:28.048 { 00:22:28.048 "name": "pt1", 00:22:28.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:28.048 "is_configured": true, 00:22:28.048 "data_offset": 2048, 00:22:28.048 "data_size": 63488 00:22:28.048 }, 00:22:28.048 { 00:22:28.048 "name": "pt2", 00:22:28.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.048 "is_configured": true, 00:22:28.048 "data_offset": 2048, 00:22:28.048 "data_size": 63488 00:22:28.048 }, 00:22:28.048 { 00:22:28.048 "name": "pt3", 00:22:28.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:28.048 "is_configured": true, 00:22:28.048 "data_offset": 2048, 00:22:28.048 "data_size": 63488 00:22:28.048 }, 00:22:28.048 { 00:22:28.048 "name": "pt4", 00:22:28.048 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:28.048 "is_configured": true, 00:22:28.048 "data_offset": 2048, 00:22:28.048 "data_size": 63488 00:22:28.048 } 00:22:28.048 ] 00:22:28.048 }' 00:22:28.048 10:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.048 10:32:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:28.614 [2024-07-26 10:32:41.469701] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:28.614 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:28.614 "name": "raid_bdev1", 00:22:28.614 "aliases": [ 00:22:28.614 "3f49f975-3cad-40ac-91ec-d620216ec1b3" 00:22:28.614 ], 00:22:28.614 "product_name": "Raid Volume", 00:22:28.614 "block_size": 512, 00:22:28.614 "num_blocks": 253952, 00:22:28.614 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 00:22:28.614 "assigned_rate_limits": { 00:22:28.614 "rw_ios_per_sec": 0, 00:22:28.614 "rw_mbytes_per_sec": 0, 00:22:28.614 "r_mbytes_per_sec": 0, 00:22:28.614 "w_mbytes_per_sec": 0 00:22:28.614 }, 00:22:28.614 "claimed": false, 00:22:28.614 "zoned": false, 00:22:28.614 "supported_io_types": { 00:22:28.614 "read": true, 00:22:28.614 "write": true, 00:22:28.614 "unmap": true, 00:22:28.614 "flush": true, 00:22:28.614 "reset": true, 00:22:28.614 "nvme_admin": false, 00:22:28.614 "nvme_io": false, 00:22:28.614 "nvme_io_md": false, 00:22:28.614 "write_zeroes": true, 00:22:28.614 "zcopy": false, 00:22:28.614 "get_zone_info": false, 00:22:28.614 "zone_management": false, 00:22:28.614 "zone_append": false, 00:22:28.614 "compare": false, 00:22:28.614 "compare_and_write": false, 00:22:28.614 "abort": false, 00:22:28.614 "seek_hole": false, 00:22:28.614 "seek_data": false, 00:22:28.614 "copy": false, 00:22:28.614 "nvme_iov_md": false 00:22:28.614 }, 00:22:28.614 "memory_domains": [ 00:22:28.614 { 00:22:28.614 "dma_device_id": "system", 00:22:28.614 "dma_device_type": 1 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.614 "dma_device_type": 2 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "system", 00:22:28.614 "dma_device_type": 1 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.614 "dma_device_type": 2 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "system", 00:22:28.614 "dma_device_type": 1 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.614 "dma_device_type": 2 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "system", 00:22:28.614 "dma_device_type": 1 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.614 "dma_device_type": 2 00:22:28.614 } 00:22:28.614 ], 00:22:28.614 "driver_specific": { 00:22:28.614 "raid": { 00:22:28.614 "uuid": "3f49f975-3cad-40ac-91ec-d620216ec1b3", 
00:22:28.614 "strip_size_kb": 64, 00:22:28.614 "state": "online", 00:22:28.614 "raid_level": "concat", 00:22:28.614 "superblock": true, 00:22:28.614 "num_base_bdevs": 4, 00:22:28.614 "num_base_bdevs_discovered": 4, 00:22:28.614 "num_base_bdevs_operational": 4, 00:22:28.614 "base_bdevs_list": [ 00:22:28.614 { 00:22:28.614 "name": "pt1", 00:22:28.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:28.614 "is_configured": true, 00:22:28.614 "data_offset": 2048, 00:22:28.614 "data_size": 63488 00:22:28.614 }, 00:22:28.614 { 00:22:28.614 "name": "pt2", 00:22:28.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.615 "is_configured": true, 00:22:28.615 "data_offset": 2048, 00:22:28.615 "data_size": 63488 00:22:28.615 }, 00:22:28.615 { 00:22:28.615 "name": "pt3", 00:22:28.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:28.615 "is_configured": true, 00:22:28.615 "data_offset": 2048, 00:22:28.615 "data_size": 63488 00:22:28.615 }, 00:22:28.615 { 00:22:28.615 "name": "pt4", 00:22:28.615 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:28.615 "is_configured": true, 00:22:28.615 "data_offset": 2048, 00:22:28.615 "data_size": 63488 00:22:28.615 } 00:22:28.615 ] 00:22:28.615 } 00:22:28.615 } 00:22:28.615 }' 00:22:28.615 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:28.873 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:28.873 pt2 00:22:28.873 pt3 00:22:28.873 pt4' 00:22:28.873 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.873 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:28.873 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:28.873 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:28.873 "name": "pt1", 00:22:28.873 "aliases": [ 00:22:28.873 "00000000-0000-0000-0000-000000000001" 00:22:28.873 ], 00:22:28.873 "product_name": "passthru", 00:22:28.873 "block_size": 512, 00:22:28.873 "num_blocks": 65536, 00:22:28.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:28.873 "assigned_rate_limits": { 00:22:28.873 "rw_ios_per_sec": 0, 00:22:28.873 "rw_mbytes_per_sec": 0, 00:22:28.873 "r_mbytes_per_sec": 0, 00:22:28.873 "w_mbytes_per_sec": 0 00:22:28.873 }, 00:22:28.873 "claimed": true, 00:22:28.873 "claim_type": "exclusive_write", 00:22:28.873 "zoned": false, 00:22:28.873 "supported_io_types": { 00:22:28.873 "read": true, 00:22:28.873 "write": true, 00:22:28.873 "unmap": true, 00:22:28.873 "flush": true, 00:22:28.873 "reset": true, 00:22:28.873 "nvme_admin": false, 00:22:28.873 "nvme_io": false, 00:22:28.873 "nvme_io_md": false, 00:22:28.873 "write_zeroes": true, 00:22:28.873 "zcopy": true, 00:22:28.873 "get_zone_info": false, 00:22:28.873 "zone_management": false, 00:22:28.873 "zone_append": false, 00:22:28.873 "compare": false, 00:22:28.873 "compare_and_write": false, 00:22:28.873 "abort": true, 00:22:28.873 "seek_hole": false, 00:22:28.873 "seek_data": false, 00:22:28.873 "copy": true, 00:22:28.873 "nvme_iov_md": false 00:22:28.873 }, 00:22:28.873 "memory_domains": [ 00:22:28.873 { 00:22:28.873 "dma_device_id": "system", 00:22:28.873 "dma_device_type": 1 00:22:28.873 }, 00:22:28.873 { 00:22:28.873 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:28.873 "dma_device_type": 2 00:22:28.873 } 00:22:28.873 ], 00:22:28.873 "driver_specific": { 00:22:28.873 "passthru": { 00:22:28.873 "name": "pt1", 00:22:28.873 "base_bdev_name": "malloc1" 00:22:28.873 } 00:22:28.873 } 00:22:28.873 }' 00:22:28.873 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.131 10:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.131 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:29.131 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.389 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.389 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:29.389 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.389 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:29.389 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:29.647 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:29.647 "name": "pt2", 00:22:29.647 "aliases": [ 00:22:29.647 "00000000-0000-0000-0000-000000000002" 00:22:29.647 ], 00:22:29.647 "product_name": "passthru", 00:22:29.647 "block_size": 512, 00:22:29.647 "num_blocks": 65536, 00:22:29.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:29.647 "assigned_rate_limits": { 00:22:29.647 "rw_ios_per_sec": 0, 00:22:29.647 "rw_mbytes_per_sec": 0, 00:22:29.647 "r_mbytes_per_sec": 0, 00:22:29.647 "w_mbytes_per_sec": 0 00:22:29.647 }, 00:22:29.647 "claimed": true, 00:22:29.647 "claim_type": "exclusive_write", 00:22:29.647 "zoned": false, 00:22:29.647 "supported_io_types": { 00:22:29.647 "read": true, 00:22:29.647 "write": true, 00:22:29.647 "unmap": true, 00:22:29.647 "flush": true, 00:22:29.647 "reset": true, 00:22:29.647 "nvme_admin": false, 00:22:29.647 "nvme_io": false, 00:22:29.647 "nvme_io_md": false, 00:22:29.647 "write_zeroes": true, 00:22:29.647 "zcopy": true, 00:22:29.647 "get_zone_info": false, 00:22:29.647 "zone_management": false, 00:22:29.647 "zone_append": false, 00:22:29.647 "compare": false, 00:22:29.647 "compare_and_write": false, 00:22:29.647 "abort": true, 00:22:29.648 "seek_hole": false, 00:22:29.648 "seek_data": false, 00:22:29.648 "copy": true, 00:22:29.648 "nvme_iov_md": false 00:22:29.648 }, 00:22:29.648 "memory_domains": [ 00:22:29.648 { 00:22:29.648 "dma_device_id": "system", 00:22:29.648 "dma_device_type": 1 00:22:29.648 }, 00:22:29.648 { 00:22:29.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.648 "dma_device_type": 2 00:22:29.648 } 00:22:29.648 ], 00:22:29.648 "driver_specific": { 
00:22:29.648 "passthru": { 00:22:29.648 "name": "pt2", 00:22:29.648 "base_bdev_name": "malloc2" 00:22:29.648 } 00:22:29.648 } 00:22:29.648 }' 00:22:29.648 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.648 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.648 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:29.648 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.648 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:29.906 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:30.164 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:30.164 "name": "pt3", 00:22:30.164 "aliases": [ 00:22:30.164 "00000000-0000-0000-0000-000000000003" 00:22:30.164 ], 00:22:30.164 "product_name": "passthru", 00:22:30.164 "block_size": 512, 00:22:30.164 "num_blocks": 65536, 00:22:30.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:30.165 "assigned_rate_limits": { 00:22:30.165 "rw_ios_per_sec": 0, 00:22:30.165 "rw_mbytes_per_sec": 0, 00:22:30.165 "r_mbytes_per_sec": 0, 00:22:30.165 "w_mbytes_per_sec": 0 00:22:30.165 }, 00:22:30.165 "claimed": true, 00:22:30.165 "claim_type": "exclusive_write", 00:22:30.165 "zoned": false, 00:22:30.165 "supported_io_types": { 00:22:30.165 "read": true, 00:22:30.165 "write": true, 00:22:30.165 "unmap": true, 00:22:30.165 "flush": true, 00:22:30.165 "reset": true, 00:22:30.165 "nvme_admin": false, 00:22:30.165 "nvme_io": false, 00:22:30.165 "nvme_io_md": false, 00:22:30.165 "write_zeroes": true, 00:22:30.165 "zcopy": true, 00:22:30.165 "get_zone_info": false, 00:22:30.165 "zone_management": false, 00:22:30.165 "zone_append": false, 00:22:30.165 "compare": false, 00:22:30.165 "compare_and_write": false, 00:22:30.165 "abort": true, 00:22:30.165 "seek_hole": false, 00:22:30.165 "seek_data": false, 00:22:30.165 "copy": true, 00:22:30.165 "nvme_iov_md": false 00:22:30.165 }, 00:22:30.165 "memory_domains": [ 00:22:30.165 { 00:22:30.165 "dma_device_id": "system", 00:22:30.165 "dma_device_type": 1 00:22:30.165 }, 00:22:30.165 { 00:22:30.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.165 "dma_device_type": 2 00:22:30.165 } 00:22:30.165 ], 00:22:30.165 "driver_specific": { 00:22:30.165 "passthru": { 00:22:30.165 "name": "pt3", 00:22:30.165 "base_bdev_name": "malloc3" 00:22:30.165 } 00:22:30.165 } 
00:22:30.165 }' 00:22:30.165 10:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.165 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.423 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.679 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:30.679 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:30.679 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:30.679 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:30.936 "name": "pt4", 00:22:30.936 "aliases": [ 00:22:30.936 "00000000-0000-0000-0000-000000000004" 00:22:30.936 ], 00:22:30.936 "product_name": "passthru", 00:22:30.936 "block_size": 512, 00:22:30.936 "num_blocks": 65536, 00:22:30.936 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:30.936 "assigned_rate_limits": { 00:22:30.936 "rw_ios_per_sec": 0, 00:22:30.936 "rw_mbytes_per_sec": 0, 00:22:30.936 "r_mbytes_per_sec": 0, 00:22:30.936 "w_mbytes_per_sec": 0 00:22:30.936 }, 00:22:30.936 "claimed": true, 00:22:30.936 "claim_type": "exclusive_write", 00:22:30.936 "zoned": false, 00:22:30.936 "supported_io_types": { 00:22:30.936 "read": true, 00:22:30.936 "write": true, 00:22:30.936 "unmap": true, 00:22:30.936 "flush": true, 00:22:30.936 "reset": true, 00:22:30.936 "nvme_admin": false, 00:22:30.936 "nvme_io": false, 00:22:30.936 "nvme_io_md": false, 00:22:30.936 "write_zeroes": true, 00:22:30.936 "zcopy": true, 00:22:30.936 "get_zone_info": false, 00:22:30.936 "zone_management": false, 00:22:30.936 "zone_append": false, 00:22:30.936 "compare": false, 00:22:30.936 "compare_and_write": false, 00:22:30.936 "abort": true, 00:22:30.936 "seek_hole": false, 00:22:30.936 "seek_data": false, 00:22:30.936 "copy": true, 00:22:30.936 "nvme_iov_md": false 00:22:30.936 }, 00:22:30.936 "memory_domains": [ 00:22:30.936 { 00:22:30.936 "dma_device_id": "system", 00:22:30.936 "dma_device_type": 1 00:22:30.936 }, 00:22:30.936 { 00:22:30.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.936 "dma_device_type": 2 00:22:30.936 } 00:22:30.936 ], 00:22:30.936 "driver_specific": { 00:22:30.936 "passthru": { 00:22:30.936 "name": "pt4", 00:22:30.936 "base_bdev_name": "malloc4" 00:22:30.936 } 00:22:30.936 } 00:22:30.936 }' 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.936 10:32:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.936 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.194 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:31.194 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.194 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.194 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:31.194 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:22:31.194 10:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:31.452 [2024-07-26 10:32:44.152779] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 3f49f975-3cad-40ac-91ec-d620216ec1b3 '!=' 3f49f975-3cad-40ac-91ec-d620216ec1b3 ']' 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3447591 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3447591 ']' 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3447591 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3447591 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3447591' 00:22:31.452 killing process with pid 3447591 00:22:31.452 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3447591 00:22:31.452 [2024-07-26 10:32:44.233636] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:31.452 [2024-07-26 10:32:44.233691] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.453 [2024-07-26 10:32:44.233749] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.453 [2024-07-26 10:32:44.233760] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f5f650 name raid_bdev1, state offline 00:22:31.453 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3447591 00:22:31.453 [2024-07-26 10:32:44.265206] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:31.710 10:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:22:31.710 00:22:31.710 real 0m16.055s 00:22:31.710 user 0m29.049s 00:22:31.710 sys 0m2.831s 00:22:31.710 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.710 10:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.710 ************************************ 00:22:31.710 END TEST raid_superblock_test 00:22:31.711 ************************************ 00:22:31.711 10:32:44 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:22:31.711 10:32:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:31.711 10:32:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.711 10:32:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:31.711 ************************************ 00:22:31.711 START TEST raid_read_error_test 00:22:31.711 ************************************ 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local 
raid_bdev_name=raid_bdev1 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.9F5dtkwHDd 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3450558 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3450558 /var/tmp/spdk-raid.sock 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3450558 ']' 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:31.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.711 10:32:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.711 [2024-07-26 10:32:44.593092] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:22:31.711 [2024-07-26 10:32:44.593148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450558 ] 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:31.969 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:31.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:31.969 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:31.969 [2024-07-26 10:32:44.717093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.969 [2024-07-26 10:32:44.761498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.969 [2024-07-26 10:32:44.818567] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:31.969 [2024-07-26 10:32:44.818598] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:32.904 10:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.904 10:32:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:32.904 10:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:32.904 10:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:32.904 BaseBdev1_malloc 00:22:32.904 10:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:33.162 true 00:22:33.162 10:32:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:33.420 [2024-07-26 10:32:46.173041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:33.420 [2024-07-26 10:32:46.173085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.420 [2024-07-26 10:32:46.173102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25da7c0 00:22:33.420 [2024-07-26 10:32:46.173113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.420 [2024-07-26 10:32:46.174591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.420 [2024-07-26 10:32:46.174616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:33.420 BaseBdev1 00:22:33.420 10:32:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:33.420 10:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:33.678 BaseBdev2_malloc 00:22:33.678 10:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:33.936 true 00:22:33.936 10:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:34.194 [2024-07-26 10:32:46.859013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:34.194 [2024-07-26 10:32:46.859057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.194 [2024-07-26 10:32:46.859075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2581960 00:22:34.194 [2024-07-26 10:32:46.859087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.194 [2024-07-26 10:32:46.860449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.194 [2024-07-26 10:32:46.860477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:34.194 BaseBdev2 00:22:34.194 10:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:34.194 10:32:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:34.194 BaseBdev3_malloc 00:22:34.453 10:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:34.453 true 00:22:34.453 10:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:34.712 [2024-07-26 10:32:47.548956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:34.712 [2024-07-26 10:32:47.548997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.712 [2024-07-26 10:32:47.549015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2584720 00:22:34.712 [2024-07-26 10:32:47.549030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.712 [2024-07-26 10:32:47.550324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.712 [2024-07-26 10:32:47.550351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:34.712 BaseBdev3 00:22:34.712 10:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:34.712 10:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:34.971 BaseBdev4_malloc 00:22:34.971 10:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:35.229 true 00:22:35.229 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:35.488 [2024-07-26 10:32:48.243002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:35.488 [2024-07-26 10:32:48.243040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.488 [2024-07-26 10:32:48.243057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25838b0 00:22:35.488 [2024-07-26 10:32:48.243069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.488 [2024-07-26 10:32:48.244346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.488 [2024-07-26 10:32:48.244371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:35.488 BaseBdev4 00:22:35.488 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:35.747 [2024-07-26 10:32:48.475643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:35.747 [2024-07-26 10:32:48.476708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:35.747 [2024-07-26 10:32:48.476768] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:35.747 [2024-07-26 10:32:48.476824] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:35.747 [2024-07-26 10:32:48.477013] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2586080 00:22:35.747 [2024-07-26 10:32:48.477023] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:35.747 [2024-07-26 10:32:48.477198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x258b6a0 00:22:35.747 [2024-07-26 10:32:48.477324] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2586080 00:22:35.747 [2024-07-26 10:32:48.477333] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2586080 00:22:35.747 [2024-07-26 10:32:48.477431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.747 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.006 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.007 "name": "raid_bdev1", 00:22:36.007 "uuid": "40e16fad-72af-4a6d-9757-8005d1f8c5df", 00:22:36.007 "strip_size_kb": 64, 00:22:36.007 "state": "online", 00:22:36.007 "raid_level": "concat", 00:22:36.007 "superblock": true, 00:22:36.007 "num_base_bdevs": 4, 00:22:36.007 "num_base_bdevs_discovered": 4, 00:22:36.007 "num_base_bdevs_operational": 4, 00:22:36.007 "base_bdevs_list": [ 00:22:36.007 { 00:22:36.007 "name": "BaseBdev1", 00:22:36.007 "uuid": "b333a24b-74d3-5498-9103-729827219ca8", 00:22:36.007 "is_configured": true, 00:22:36.007 "data_offset": 2048, 00:22:36.007 "data_size": 63488 00:22:36.007 }, 00:22:36.007 { 00:22:36.007 "name": "BaseBdev2", 00:22:36.007 "uuid": "028c71cc-e666-57e9-bc75-44d85d189d3c", 00:22:36.007 "is_configured": true, 00:22:36.007 "data_offset": 2048, 00:22:36.007 "data_size": 63488 00:22:36.007 }, 00:22:36.007 { 00:22:36.007 "name": "BaseBdev3", 00:22:36.007 "uuid": "5b9c38e8-4cec-5e70-af03-b88f31dfecae", 00:22:36.007 "is_configured": true, 00:22:36.007 "data_offset": 2048, 00:22:36.007 "data_size": 63488 00:22:36.007 }, 00:22:36.007 { 00:22:36.007 "name": "BaseBdev4", 00:22:36.007 "uuid": "638cfe99-ba24-557f-9093-fc2ed6ef93c2", 00:22:36.007 "is_configured": true, 00:22:36.007 "data_offset": 2048, 00:22:36.007 "data_size": 63488 00:22:36.007 } 00:22:36.007 ] 00:22:36.007 }' 00:22:36.007 10:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.007 10:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.574 10:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:36.575 10:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:36.833 [2024-07-26 10:32:49.518650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2580d30 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:37.772 10:32:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.772 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.031 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.031 "name": "raid_bdev1", 00:22:38.031 "uuid": "40e16fad-72af-4a6d-9757-8005d1f8c5df", 00:22:38.031 "strip_size_kb": 64, 00:22:38.031 "state": "online", 00:22:38.031 "raid_level": "concat", 00:22:38.031 "superblock": true, 00:22:38.031 "num_base_bdevs": 4, 00:22:38.031 "num_base_bdevs_discovered": 4, 00:22:38.031 "num_base_bdevs_operational": 4, 00:22:38.031 "base_bdevs_list": [ 00:22:38.031 { 00:22:38.031 "name": "BaseBdev1", 00:22:38.031 "uuid": "b333a24b-74d3-5498-9103-729827219ca8", 00:22:38.031 "is_configured": true, 00:22:38.031 "data_offset": 2048, 00:22:38.031 "data_size": 63488 00:22:38.031 }, 00:22:38.031 { 00:22:38.031 "name": "BaseBdev2", 00:22:38.031 "uuid": "028c71cc-e666-57e9-bc75-44d85d189d3c", 00:22:38.031 "is_configured": true, 00:22:38.031 "data_offset": 2048, 00:22:38.031 "data_size": 63488 00:22:38.031 }, 00:22:38.031 { 00:22:38.031 "name": "BaseBdev3", 00:22:38.031 "uuid": "5b9c38e8-4cec-5e70-af03-b88f31dfecae", 00:22:38.031 "is_configured": true, 00:22:38.031 "data_offset": 2048, 00:22:38.031 "data_size": 63488 00:22:38.031 }, 00:22:38.031 { 00:22:38.031 "name": "BaseBdev4", 00:22:38.031 "uuid": "638cfe99-ba24-557f-9093-fc2ed6ef93c2", 00:22:38.031 "is_configured": true, 00:22:38.031 "data_offset": 2048, 00:22:38.031 "data_size": 63488 00:22:38.031 } 00:22:38.031 ] 00:22:38.031 }' 00:22:38.031 10:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.031 10:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.600 10:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:39.169 [2024-07-26 10:32:51.827397] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.169 [2024-07-26 10:32:51.827430] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:39.169 [2024-07-26 10:32:51.830347] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.169 [2024-07-26 10:32:51.830386] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.169 [2024-07-26 10:32:51.830423] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.169 [2024-07-26 10:32:51.830434] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2586080 name raid_bdev1, state offline 00:22:39.169 0 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3450558 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3450558 ']' 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3450558 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3450558 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3450558' 00:22:39.169 killing process with pid 3450558 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3450558 00:22:39.169 [2024-07-26 10:32:51.925196] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.169 10:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3450558 00:22:39.169 [2024-07-26 10:32:51.951583] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.9F5dtkwHDd 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:39.431 00:22:39.431 real 0m7.625s 00:22:39.431 user 0m12.357s 00:22:39.431 sys 0m1.337s 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.431 10:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.431 ************************************ 00:22:39.431 END TEST raid_read_error_test 00:22:39.431 ************************************ 00:22:39.431 10:32:52 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:22:39.431 10:32:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:39.431 10:32:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.431 10:32:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.431 ************************************ 00:22:39.431 START TEST raid_write_error_test 00:22:39.431 ************************************ 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:22:39.431 10:32:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:39.431 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.PNH7GdJ89k 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3451979 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3451979 /var/tmp/spdk-raid.sock 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:39.432 10:32:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3451979 ']' 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:39.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.432 10:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.432 [2024-07-26 10:32:52.311627] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:22:39.432 [2024-07-26 10:32:52.311688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451979 ] 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:39.692 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:39.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.692 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:39.692 [2024-07-26 10:32:52.447225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.692 [2024-07-26 10:32:52.489788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.692 [2024-07-26 10:32:52.555830] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.692 [2024-07-26 10:32:52.555872] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.628 10:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.628 10:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:40.628 10:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:40.628 10:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:40.628 BaseBdev1_malloc 00:22:40.628 10:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:40.886 true 00:22:40.886 10:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:41.145 [2024-07-26 10:32:53.868668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:41.145 [2024-07-26 10:32:53.868711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.145 [2024-07-26 10:32:53.868729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7a7c0 00:22:41.145 [2024-07-26 10:32:53.868740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.145 [2024-07-26 10:32:53.870200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.145 [2024-07-26 10:32:53.870225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:41.145 BaseBdev1 00:22:41.145 10:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:41.145 10:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:41.404 BaseBdev2_malloc 00:22:41.404 10:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:41.661 true 00:22:41.661 10:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:41.661 [2024-07-26 10:32:54.554564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:41.661 [2024-07-26 10:32:54.554601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.661 [2024-07-26 10:32:54.554620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c21960 00:22:41.661 [2024-07-26 10:32:54.554631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.661 [2024-07-26 10:32:54.555868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.661 [2024-07-26 10:32:54.555894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:41.661 BaseBdev2 00:22:41.919 10:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:41.919 10:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:41.919 BaseBdev3_malloc 00:22:41.919 10:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:42.177 true 00:22:42.177 10:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:42.436 [2024-07-26 10:32:55.240461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:42.436 [2024-07-26 10:32:55.240499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:22:42.436 [2024-07-26 10:32:55.240517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c24720 00:22:42.436 [2024-07-26 10:32:55.240528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.436 [2024-07-26 10:32:55.241753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.436 [2024-07-26 10:32:55.241779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:42.436 BaseBdev3 00:22:42.436 10:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:42.436 10:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:42.694 BaseBdev4_malloc 00:22:42.694 10:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:42.953 true 00:22:42.953 10:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:43.211 [2024-07-26 10:32:55.914190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:43.211 [2024-07-26 10:32:55.914226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.211 [2024-07-26 10:32:55.914244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c238b0 00:22:43.211 [2024-07-26 10:32:55.914255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.211 [2024-07-26 10:32:55.915503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.211 [2024-07-26 10:32:55.915528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:43.211 BaseBdev4 00:22:43.211 10:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:43.469 [2024-07-26 10:32:56.134798] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.469 [2024-07-26 10:32:56.135917] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.469 [2024-07-26 10:32:56.135976] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.469 [2024-07-26 10:32:56.136034] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:43.469 [2024-07-26 10:32:56.136230] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c26080 00:22:43.469 [2024-07-26 10:32:56.136240] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:43.469 [2024-07-26 10:32:56.136413] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2b6a0 00:22:43.469 [2024-07-26 10:32:56.136539] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c26080 00:22:43.469 [2024-07-26 10:32:56.136548] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c26080 00:22:43.469 [2024-07-26 
10:32:56.136650] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.469 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.727 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.727 "name": "raid_bdev1", 00:22:43.727 "uuid": "32d31a7e-eae5-4f39-a34b-a977d811c72f", 00:22:43.727 "strip_size_kb": 64, 00:22:43.727 "state": "online", 00:22:43.727 "raid_level": "concat", 00:22:43.727 "superblock": true, 00:22:43.727 "num_base_bdevs": 4, 00:22:43.727 "num_base_bdevs_discovered": 4, 00:22:43.727 "num_base_bdevs_operational": 4, 00:22:43.727 "base_bdevs_list": [ 00:22:43.727 { 00:22:43.727 "name": "BaseBdev1", 00:22:43.727 "uuid": "88107b65-c520-54d8-a020-619f1830f570", 00:22:43.727 "is_configured": true, 00:22:43.727 "data_offset": 2048, 00:22:43.727 "data_size": 63488 00:22:43.727 }, 00:22:43.727 { 00:22:43.727 "name": "BaseBdev2", 00:22:43.727 "uuid": "02429f7d-43a6-580b-8162-374fb4e18f3d", 00:22:43.727 "is_configured": true, 00:22:43.727 "data_offset": 2048, 00:22:43.727 "data_size": 63488 00:22:43.727 }, 00:22:43.727 { 00:22:43.727 "name": "BaseBdev3", 00:22:43.727 "uuid": "c0b316fa-a83b-5585-bf47-f4468bcb89bd", 00:22:43.727 "is_configured": true, 00:22:43.727 "data_offset": 2048, 00:22:43.727 "data_size": 63488 00:22:43.727 }, 00:22:43.727 { 00:22:43.727 "name": "BaseBdev4", 00:22:43.727 "uuid": "1b5ec2c2-a576-5e64-8bc6-99bd6c0d45b0", 00:22:43.727 "is_configured": true, 00:22:43.727 "data_offset": 2048, 00:22:43.727 "data_size": 63488 00:22:43.727 } 00:22:43.727 ] 00:22:43.727 }' 00:22:43.727 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.727 10:32:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.293 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:44.294 10:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:44.294 [2024-07-26 10:32:57.061495] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x1c20d30 00:22:45.228 10:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.487 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.746 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.746 "name": "raid_bdev1", 00:22:45.746 "uuid": "32d31a7e-eae5-4f39-a34b-a977d811c72f", 00:22:45.746 "strip_size_kb": 64, 00:22:45.746 "state": "online", 00:22:45.746 "raid_level": "concat", 00:22:45.746 "superblock": true, 00:22:45.746 "num_base_bdevs": 4, 00:22:45.746 "num_base_bdevs_discovered": 4, 00:22:45.746 "num_base_bdevs_operational": 4, 00:22:45.746 "base_bdevs_list": [ 00:22:45.746 { 00:22:45.746 "name": "BaseBdev1", 00:22:45.746 "uuid": "88107b65-c520-54d8-a020-619f1830f570", 00:22:45.746 "is_configured": true, 00:22:45.746 "data_offset": 2048, 00:22:45.746 "data_size": 63488 00:22:45.746 }, 00:22:45.746 { 00:22:45.746 "name": "BaseBdev2", 00:22:45.746 "uuid": "02429f7d-43a6-580b-8162-374fb4e18f3d", 00:22:45.746 "is_configured": true, 00:22:45.746 "data_offset": 2048, 00:22:45.746 "data_size": 63488 00:22:45.746 }, 00:22:45.746 { 00:22:45.746 "name": "BaseBdev3", 00:22:45.746 "uuid": "c0b316fa-a83b-5585-bf47-f4468bcb89bd", 00:22:45.746 "is_configured": true, 00:22:45.746 "data_offset": 2048, 00:22:45.746 "data_size": 63488 00:22:45.746 }, 00:22:45.746 { 00:22:45.746 "name": "BaseBdev4", 00:22:45.746 "uuid": "1b5ec2c2-a576-5e64-8bc6-99bd6c0d45b0", 00:22:45.746 "is_configured": true, 00:22:45.746 "data_offset": 2048, 00:22:45.746 "data_size": 63488 00:22:45.746 } 00:22:45.746 ] 00:22:45.746 }' 00:22:45.746 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:22:45.746 10:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.313 10:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:46.313 [2024-07-26 10:32:59.203986] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.313 [2024-07-26 10:32:59.204021] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:46.313 [2024-07-26 10:32:59.206945] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.313 [2024-07-26 10:32:59.206986] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.313 [2024-07-26 10:32:59.207025] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.313 [2024-07-26 10:32:59.207035] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c26080 name raid_bdev1, state offline 00:22:46.313 0 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3451979 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3451979 ']' 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3451979 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3451979 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3451979' 00:22:46.571 killing process with pid 3451979 00:22:46.571 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3451979 00:22:46.572 [2024-07-26 10:32:59.282765] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:46.572 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3451979 00:22:46.572 [2024-07-26 10:32:59.308646] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.PNH7GdJ89k 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:22:46.830 00:22:46.830 real 0m7.270s 00:22:46.830 user 0m11.578s 00:22:46.830 sys 0m1.297s 00:22:46.830 10:32:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.830 10:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.830 ************************************ 00:22:46.830 END TEST raid_write_error_test 00:22:46.830 ************************************ 00:22:46.830 10:32:59 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:22:46.830 10:32:59 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:22:46.830 10:32:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:46.830 10:32:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.830 10:32:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.830 ************************************ 00:22:46.830 START TEST raid_state_function_test 00:22:46.830 ************************************ 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:46.830 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3453393 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3453393' 00:22:46.831 Process raid pid: 3453393 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3453393 /var/tmp/spdk-raid.sock 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3453393 ']' 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:46.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.831 10:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.831 [2024-07-26 10:32:59.646658] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
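For reference, the state-machine flow that raid_state_function_test drives against this freshly started bdev_svc can be reproduced by hand with the same RPC calls that appear further down in this transcript. The sketch below is assembled only from those calls and is not the test script itself; it simplifies away the intermediate delete/re-create passes the test also performs, and the socket path, bdev names, and the malloc create arguments (32 512) are copied from the command lines in this log.

  #!/usr/bin/env bash
  # Minimal sketch of the raid1 state-function flow exercised below (reconstructed from this log).
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 1. Creating the raid1 bdev before any base bdev exists leaves it in the "configuring" state
  #    with num_base_bdevs_discovered = 0, as the first Existed_Raid dump below shows.
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # 2. Add the base bdevs one at a time; each one is claimed as it appears and
  #    num_base_bdevs_discovered grows from 1 to 4 in the dumps that follow.
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $RPC bdev_malloc_create 32 512 -b "$bdev"
  done

  # 3. Once the fourth base bdev is claimed the raid bdev switches from "configuring" to "online".
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # 4. Deleting the raid bdev tears the configuration back down.
  $RPC bdev_raid_delete Existed_Raid

The transcript below additionally exercises bdev_raid_delete while the array is still in the configuring state (after zero and after one discovered base bdev) before re-creating it, which the simplified sketch above omits.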
00:22:46.831 [2024-07-26 10:32:59.646715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:46.831 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:46.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:46.831 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:47.090 [2024-07-26 10:32:59.779855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.090 [2024-07-26 10:32:59.824789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.090 [2024-07-26 10:32:59.883202] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.090 [2024-07-26 10:32:59.883237] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:48.024 [2024-07-26 10:33:00.767639] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.024 [2024-07-26 10:33:00.767682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.024 [2024-07-26 10:33:00.767692] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.024 [2024-07-26 10:33:00.767703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.024 [2024-07-26 10:33:00.767712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:48.024 [2024-07-26 10:33:00.767722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.024 [2024-07-26 10:33:00.767730] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:48.024 [2024-07-26 10:33:00.767746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.024 10:33:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.282 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.282 "name": "Existed_Raid", 00:22:48.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.282 "strip_size_kb": 0, 00:22:48.282 "state": "configuring", 00:22:48.282 "raid_level": "raid1", 00:22:48.282 "superblock": false, 00:22:48.282 "num_base_bdevs": 4, 00:22:48.282 "num_base_bdevs_discovered": 0, 00:22:48.282 "num_base_bdevs_operational": 4, 00:22:48.282 "base_bdevs_list": [ 00:22:48.282 { 00:22:48.282 "name": "BaseBdev1", 00:22:48.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.282 "is_configured": false, 00:22:48.282 "data_offset": 0, 00:22:48.282 "data_size": 0 00:22:48.282 }, 00:22:48.282 { 00:22:48.282 "name": "BaseBdev2", 00:22:48.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.282 "is_configured": false, 00:22:48.282 "data_offset": 0, 00:22:48.282 "data_size": 0 00:22:48.282 }, 00:22:48.282 { 00:22:48.282 "name": "BaseBdev3", 00:22:48.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.282 "is_configured": false, 00:22:48.282 "data_offset": 0, 00:22:48.282 "data_size": 0 00:22:48.282 }, 00:22:48.282 { 00:22:48.282 "name": "BaseBdev4", 00:22:48.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.282 "is_configured": false, 00:22:48.282 "data_offset": 0, 00:22:48.282 "data_size": 0 00:22:48.282 } 00:22:48.282 ] 00:22:48.282 }' 00:22:48.282 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.282 10:33:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.846 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:48.847 [2024-07-26 10:33:01.730070] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.847 [2024-07-26 10:33:01.730100] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a0d00 name Existed_Raid, state configuring 00:22:49.105 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:49.105 [2024-07-26 
10:33:01.890504] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:49.105 [2024-07-26 10:33:01.890533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:49.105 [2024-07-26 10:33:01.890542] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:49.105 [2024-07-26 10:33:01.890553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:49.105 [2024-07-26 10:33:01.890561] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:49.105 [2024-07-26 10:33:01.890571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:49.105 [2024-07-26 10:33:01.890578] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:49.105 [2024-07-26 10:33:01.890588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:49.105 10:33:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:49.363 [2024-07-26 10:33:02.064370] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.363 BaseBdev1 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:49.363 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:49.621 [ 00:22:49.621 { 00:22:49.621 "name": "BaseBdev1", 00:22:49.621 "aliases": [ 00:22:49.621 "080d6bc0-8ec5-47b3-a749-11aad641b7e4" 00:22:49.621 ], 00:22:49.621 "product_name": "Malloc disk", 00:22:49.621 "block_size": 512, 00:22:49.621 "num_blocks": 65536, 00:22:49.621 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:49.621 "assigned_rate_limits": { 00:22:49.621 "rw_ios_per_sec": 0, 00:22:49.621 "rw_mbytes_per_sec": 0, 00:22:49.621 "r_mbytes_per_sec": 0, 00:22:49.621 "w_mbytes_per_sec": 0 00:22:49.621 }, 00:22:49.621 "claimed": true, 00:22:49.621 "claim_type": "exclusive_write", 00:22:49.621 "zoned": false, 00:22:49.621 "supported_io_types": { 00:22:49.621 "read": true, 00:22:49.621 "write": true, 00:22:49.621 "unmap": true, 00:22:49.621 "flush": true, 00:22:49.621 "reset": true, 00:22:49.621 "nvme_admin": false, 00:22:49.621 "nvme_io": false, 00:22:49.621 "nvme_io_md": false, 00:22:49.621 "write_zeroes": true, 00:22:49.621 "zcopy": true, 00:22:49.621 "get_zone_info": false, 00:22:49.621 "zone_management": false, 00:22:49.621 
"zone_append": false, 00:22:49.621 "compare": false, 00:22:49.621 "compare_and_write": false, 00:22:49.621 "abort": true, 00:22:49.621 "seek_hole": false, 00:22:49.621 "seek_data": false, 00:22:49.621 "copy": true, 00:22:49.621 "nvme_iov_md": false 00:22:49.621 }, 00:22:49.621 "memory_domains": [ 00:22:49.621 { 00:22:49.621 "dma_device_id": "system", 00:22:49.621 "dma_device_type": 1 00:22:49.621 }, 00:22:49.621 { 00:22:49.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.621 "dma_device_type": 2 00:22:49.621 } 00:22:49.621 ], 00:22:49.621 "driver_specific": {} 00:22:49.621 } 00:22:49.621 ] 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.621 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.880 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.880 "name": "Existed_Raid", 00:22:49.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.880 "strip_size_kb": 0, 00:22:49.880 "state": "configuring", 00:22:49.880 "raid_level": "raid1", 00:22:49.880 "superblock": false, 00:22:49.880 "num_base_bdevs": 4, 00:22:49.880 "num_base_bdevs_discovered": 1, 00:22:49.880 "num_base_bdevs_operational": 4, 00:22:49.880 "base_bdevs_list": [ 00:22:49.880 { 00:22:49.880 "name": "BaseBdev1", 00:22:49.880 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:49.880 "is_configured": true, 00:22:49.880 "data_offset": 0, 00:22:49.880 "data_size": 65536 00:22:49.880 }, 00:22:49.880 { 00:22:49.880 "name": "BaseBdev2", 00:22:49.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.880 "is_configured": false, 00:22:49.880 "data_offset": 0, 00:22:49.880 "data_size": 0 00:22:49.880 }, 00:22:49.880 { 00:22:49.880 "name": "BaseBdev3", 00:22:49.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.880 "is_configured": false, 00:22:49.880 "data_offset": 0, 00:22:49.880 "data_size": 0 00:22:49.880 }, 00:22:49.880 { 00:22:49.880 "name": "BaseBdev4", 00:22:49.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.880 "is_configured": false, 00:22:49.880 "data_offset": 0, 
00:22:49.880 "data_size": 0 00:22:49.880 } 00:22:49.880 ] 00:22:49.880 }' 00:22:49.880 10:33:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.880 10:33:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.446 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:50.702 [2024-07-26 10:33:03.435902] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:50.702 [2024-07-26 10:33:03.435936] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a0630 name Existed_Raid, state configuring 00:22:50.702 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:50.959 [2024-07-26 10:33:03.608391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:50.959 [2024-07-26 10:33:03.609716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:50.959 [2024-07-26 10:33:03.609747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:50.959 [2024-07-26 10:33:03.609757] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:50.959 [2024-07-26 10:33:03.609767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:50.959 [2024-07-26 10:33:03.609775] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:50.959 [2024-07-26 10:33:03.609785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.959 10:33:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.216 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.216 "name": "Existed_Raid", 00:22:51.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.216 "strip_size_kb": 0, 00:22:51.216 "state": "configuring", 00:22:51.216 "raid_level": "raid1", 00:22:51.216 "superblock": false, 00:22:51.216 "num_base_bdevs": 4, 00:22:51.216 "num_base_bdevs_discovered": 1, 00:22:51.216 "num_base_bdevs_operational": 4, 00:22:51.216 "base_bdevs_list": [ 00:22:51.216 { 00:22:51.216 "name": "BaseBdev1", 00:22:51.216 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:51.216 "is_configured": true, 00:22:51.216 "data_offset": 0, 00:22:51.216 "data_size": 65536 00:22:51.216 }, 00:22:51.216 { 00:22:51.216 "name": "BaseBdev2", 00:22:51.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.216 "is_configured": false, 00:22:51.216 "data_offset": 0, 00:22:51.216 "data_size": 0 00:22:51.216 }, 00:22:51.216 { 00:22:51.216 "name": "BaseBdev3", 00:22:51.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.216 "is_configured": false, 00:22:51.216 "data_offset": 0, 00:22:51.216 "data_size": 0 00:22:51.216 }, 00:22:51.216 { 00:22:51.216 "name": "BaseBdev4", 00:22:51.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.216 "is_configured": false, 00:22:51.216 "data_offset": 0, 00:22:51.216 "data_size": 0 00:22:51.216 } 00:22:51.216 ] 00:22:51.216 }' 00:22:51.216 10:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.216 10:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:51.820 [2024-07-26 10:33:04.662248] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:51.820 BaseBdev2 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:51.820 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.078 10:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:52.336 [ 00:22:52.336 { 00:22:52.336 "name": "BaseBdev2", 00:22:52.336 "aliases": [ 00:22:52.336 "637e8b76-9177-4345-b529-9f4fc7632804" 00:22:52.336 ], 00:22:52.336 "product_name": "Malloc disk", 00:22:52.336 "block_size": 512, 00:22:52.336 "num_blocks": 65536, 00:22:52.336 "uuid": "637e8b76-9177-4345-b529-9f4fc7632804", 00:22:52.336 "assigned_rate_limits": { 00:22:52.336 
"rw_ios_per_sec": 0, 00:22:52.336 "rw_mbytes_per_sec": 0, 00:22:52.336 "r_mbytes_per_sec": 0, 00:22:52.336 "w_mbytes_per_sec": 0 00:22:52.336 }, 00:22:52.336 "claimed": true, 00:22:52.336 "claim_type": "exclusive_write", 00:22:52.336 "zoned": false, 00:22:52.336 "supported_io_types": { 00:22:52.336 "read": true, 00:22:52.336 "write": true, 00:22:52.336 "unmap": true, 00:22:52.336 "flush": true, 00:22:52.336 "reset": true, 00:22:52.336 "nvme_admin": false, 00:22:52.336 "nvme_io": false, 00:22:52.336 "nvme_io_md": false, 00:22:52.336 "write_zeroes": true, 00:22:52.336 "zcopy": true, 00:22:52.336 "get_zone_info": false, 00:22:52.336 "zone_management": false, 00:22:52.336 "zone_append": false, 00:22:52.336 "compare": false, 00:22:52.336 "compare_and_write": false, 00:22:52.336 "abort": true, 00:22:52.336 "seek_hole": false, 00:22:52.336 "seek_data": false, 00:22:52.336 "copy": true, 00:22:52.336 "nvme_iov_md": false 00:22:52.336 }, 00:22:52.336 "memory_domains": [ 00:22:52.336 { 00:22:52.336 "dma_device_id": "system", 00:22:52.336 "dma_device_type": 1 00:22:52.336 }, 00:22:52.336 { 00:22:52.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.336 "dma_device_type": 2 00:22:52.336 } 00:22:52.336 ], 00:22:52.336 "driver_specific": {} 00:22:52.336 } 00:22:52.336 ] 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.336 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.594 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.594 "name": "Existed_Raid", 00:22:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.594 "strip_size_kb": 0, 00:22:52.594 "state": "configuring", 00:22:52.594 "raid_level": "raid1", 00:22:52.594 "superblock": false, 00:22:52.594 "num_base_bdevs": 4, 00:22:52.594 "num_base_bdevs_discovered": 2, 00:22:52.594 "num_base_bdevs_operational": 4, 
00:22:52.594 "base_bdevs_list": [ 00:22:52.594 { 00:22:52.594 "name": "BaseBdev1", 00:22:52.594 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:52.594 "is_configured": true, 00:22:52.594 "data_offset": 0, 00:22:52.594 "data_size": 65536 00:22:52.594 }, 00:22:52.594 { 00:22:52.594 "name": "BaseBdev2", 00:22:52.594 "uuid": "637e8b76-9177-4345-b529-9f4fc7632804", 00:22:52.594 "is_configured": true, 00:22:52.594 "data_offset": 0, 00:22:52.594 "data_size": 65536 00:22:52.594 }, 00:22:52.594 { 00:22:52.594 "name": "BaseBdev3", 00:22:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.594 "is_configured": false, 00:22:52.594 "data_offset": 0, 00:22:52.594 "data_size": 0 00:22:52.594 }, 00:22:52.594 { 00:22:52.594 "name": "BaseBdev4", 00:22:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.594 "is_configured": false, 00:22:52.594 "data_offset": 0, 00:22:52.594 "data_size": 0 00:22:52.594 } 00:22:52.594 ] 00:22:52.594 }' 00:22:52.594 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.594 10:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.159 10:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:53.417 [2024-07-26 10:33:06.109375] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:53.417 BaseBdev3 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:53.417 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:53.675 [ 00:22:53.675 { 00:22:53.675 "name": "BaseBdev3", 00:22:53.675 "aliases": [ 00:22:53.675 "ad1752a9-7f85-4190-84a7-cef11e31ca0e" 00:22:53.675 ], 00:22:53.675 "product_name": "Malloc disk", 00:22:53.675 "block_size": 512, 00:22:53.675 "num_blocks": 65536, 00:22:53.675 "uuid": "ad1752a9-7f85-4190-84a7-cef11e31ca0e", 00:22:53.675 "assigned_rate_limits": { 00:22:53.675 "rw_ios_per_sec": 0, 00:22:53.675 "rw_mbytes_per_sec": 0, 00:22:53.675 "r_mbytes_per_sec": 0, 00:22:53.675 "w_mbytes_per_sec": 0 00:22:53.675 }, 00:22:53.675 "claimed": true, 00:22:53.675 "claim_type": "exclusive_write", 00:22:53.675 "zoned": false, 00:22:53.675 "supported_io_types": { 00:22:53.675 "read": true, 00:22:53.675 "write": true, 00:22:53.675 "unmap": true, 00:22:53.675 "flush": true, 00:22:53.675 "reset": true, 00:22:53.675 "nvme_admin": false, 00:22:53.675 "nvme_io": false, 00:22:53.675 "nvme_io_md": false, 00:22:53.675 
"write_zeroes": true, 00:22:53.675 "zcopy": true, 00:22:53.675 "get_zone_info": false, 00:22:53.675 "zone_management": false, 00:22:53.675 "zone_append": false, 00:22:53.675 "compare": false, 00:22:53.675 "compare_and_write": false, 00:22:53.675 "abort": true, 00:22:53.675 "seek_hole": false, 00:22:53.675 "seek_data": false, 00:22:53.675 "copy": true, 00:22:53.675 "nvme_iov_md": false 00:22:53.675 }, 00:22:53.675 "memory_domains": [ 00:22:53.675 { 00:22:53.675 "dma_device_id": "system", 00:22:53.675 "dma_device_type": 1 00:22:53.675 }, 00:22:53.675 { 00:22:53.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.675 "dma_device_type": 2 00:22:53.675 } 00:22:53.675 ], 00:22:53.675 "driver_specific": {} 00:22:53.675 } 00:22:53.675 ] 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.675 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.933 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.933 "name": "Existed_Raid", 00:22:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.933 "strip_size_kb": 0, 00:22:53.933 "state": "configuring", 00:22:53.933 "raid_level": "raid1", 00:22:53.933 "superblock": false, 00:22:53.933 "num_base_bdevs": 4, 00:22:53.933 "num_base_bdevs_discovered": 3, 00:22:53.933 "num_base_bdevs_operational": 4, 00:22:53.933 "base_bdevs_list": [ 00:22:53.933 { 00:22:53.933 "name": "BaseBdev1", 00:22:53.933 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:53.933 "is_configured": true, 00:22:53.933 "data_offset": 0, 00:22:53.933 "data_size": 65536 00:22:53.933 }, 00:22:53.933 { 00:22:53.933 "name": "BaseBdev2", 00:22:53.933 "uuid": "637e8b76-9177-4345-b529-9f4fc7632804", 00:22:53.933 "is_configured": true, 00:22:53.933 "data_offset": 0, 00:22:53.933 "data_size": 65536 00:22:53.933 }, 00:22:53.933 { 00:22:53.933 "name": "BaseBdev3", 
00:22:53.933 "uuid": "ad1752a9-7f85-4190-84a7-cef11e31ca0e", 00:22:53.933 "is_configured": true, 00:22:53.933 "data_offset": 0, 00:22:53.933 "data_size": 65536 00:22:53.933 }, 00:22:53.933 { 00:22:53.933 "name": "BaseBdev4", 00:22:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.933 "is_configured": false, 00:22:53.933 "data_offset": 0, 00:22:53.933 "data_size": 0 00:22:53.933 } 00:22:53.933 ] 00:22:53.933 }' 00:22:53.933 10:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.933 10:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.499 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:54.757 [2024-07-26 10:33:07.596402] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:54.757 [2024-07-26 10:33:07.596435] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb534f0 00:22:54.757 [2024-07-26 10:33:07.596442] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:54.757 [2024-07-26 10:33:07.596670] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a4090 00:22:54.757 [2024-07-26 10:33:07.596782] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb534f0 00:22:54.757 [2024-07-26 10:33:07.596795] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb534f0 00:22:54.757 [2024-07-26 10:33:07.596937] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.757 BaseBdev4 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:54.757 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:55.015 10:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:55.273 [ 00:22:55.273 { 00:22:55.273 "name": "BaseBdev4", 00:22:55.273 "aliases": [ 00:22:55.273 "afd782ab-5ac2-477d-87b6-7a99bdff9082" 00:22:55.273 ], 00:22:55.273 "product_name": "Malloc disk", 00:22:55.273 "block_size": 512, 00:22:55.273 "num_blocks": 65536, 00:22:55.273 "uuid": "afd782ab-5ac2-477d-87b6-7a99bdff9082", 00:22:55.273 "assigned_rate_limits": { 00:22:55.273 "rw_ios_per_sec": 0, 00:22:55.273 "rw_mbytes_per_sec": 0, 00:22:55.273 "r_mbytes_per_sec": 0, 00:22:55.273 "w_mbytes_per_sec": 0 00:22:55.273 }, 00:22:55.273 "claimed": true, 00:22:55.273 "claim_type": "exclusive_write", 00:22:55.273 "zoned": false, 00:22:55.273 "supported_io_types": { 00:22:55.273 "read": true, 00:22:55.273 
"write": true, 00:22:55.273 "unmap": true, 00:22:55.273 "flush": true, 00:22:55.273 "reset": true, 00:22:55.273 "nvme_admin": false, 00:22:55.273 "nvme_io": false, 00:22:55.273 "nvme_io_md": false, 00:22:55.273 "write_zeroes": true, 00:22:55.273 "zcopy": true, 00:22:55.273 "get_zone_info": false, 00:22:55.273 "zone_management": false, 00:22:55.273 "zone_append": false, 00:22:55.273 "compare": false, 00:22:55.273 "compare_and_write": false, 00:22:55.273 "abort": true, 00:22:55.273 "seek_hole": false, 00:22:55.273 "seek_data": false, 00:22:55.273 "copy": true, 00:22:55.273 "nvme_iov_md": false 00:22:55.273 }, 00:22:55.273 "memory_domains": [ 00:22:55.273 { 00:22:55.273 "dma_device_id": "system", 00:22:55.273 "dma_device_type": 1 00:22:55.273 }, 00:22:55.273 { 00:22:55.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.273 "dma_device_type": 2 00:22:55.273 } 00:22:55.273 ], 00:22:55.273 "driver_specific": {} 00:22:55.273 } 00:22:55.273 ] 00:22:55.273 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:55.273 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:55.273 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:55.273 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:55.273 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:55.273 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.274 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.532 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.532 "name": "Existed_Raid", 00:22:55.532 "uuid": "e039d8f4-0d0f-4ffb-850c-0561ef12456a", 00:22:55.532 "strip_size_kb": 0, 00:22:55.532 "state": "online", 00:22:55.532 "raid_level": "raid1", 00:22:55.532 "superblock": false, 00:22:55.532 "num_base_bdevs": 4, 00:22:55.532 "num_base_bdevs_discovered": 4, 00:22:55.532 "num_base_bdevs_operational": 4, 00:22:55.532 "base_bdevs_list": [ 00:22:55.532 { 00:22:55.532 "name": "BaseBdev1", 00:22:55.532 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:55.532 "is_configured": true, 00:22:55.532 "data_offset": 0, 00:22:55.532 "data_size": 65536 00:22:55.532 }, 00:22:55.532 { 00:22:55.532 "name": "BaseBdev2", 00:22:55.532 "uuid": 
"637e8b76-9177-4345-b529-9f4fc7632804", 00:22:55.532 "is_configured": true, 00:22:55.532 "data_offset": 0, 00:22:55.532 "data_size": 65536 00:22:55.532 }, 00:22:55.532 { 00:22:55.532 "name": "BaseBdev3", 00:22:55.532 "uuid": "ad1752a9-7f85-4190-84a7-cef11e31ca0e", 00:22:55.532 "is_configured": true, 00:22:55.532 "data_offset": 0, 00:22:55.532 "data_size": 65536 00:22:55.532 }, 00:22:55.532 { 00:22:55.532 "name": "BaseBdev4", 00:22:55.532 "uuid": "afd782ab-5ac2-477d-87b6-7a99bdff9082", 00:22:55.532 "is_configured": true, 00:22:55.532 "data_offset": 0, 00:22:55.532 "data_size": 65536 00:22:55.532 } 00:22:55.532 ] 00:22:55.532 }' 00:22:55.532 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.532 10:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:56.099 10:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:56.358 [2024-07-26 10:33:09.088662] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:56.358 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:56.358 "name": "Existed_Raid", 00:22:56.358 "aliases": [ 00:22:56.358 "e039d8f4-0d0f-4ffb-850c-0561ef12456a" 00:22:56.358 ], 00:22:56.358 "product_name": "Raid Volume", 00:22:56.358 "block_size": 512, 00:22:56.358 "num_blocks": 65536, 00:22:56.358 "uuid": "e039d8f4-0d0f-4ffb-850c-0561ef12456a", 00:22:56.358 "assigned_rate_limits": { 00:22:56.358 "rw_ios_per_sec": 0, 00:22:56.358 "rw_mbytes_per_sec": 0, 00:22:56.358 "r_mbytes_per_sec": 0, 00:22:56.358 "w_mbytes_per_sec": 0 00:22:56.358 }, 00:22:56.358 "claimed": false, 00:22:56.358 "zoned": false, 00:22:56.358 "supported_io_types": { 00:22:56.358 "read": true, 00:22:56.358 "write": true, 00:22:56.358 "unmap": false, 00:22:56.358 "flush": false, 00:22:56.358 "reset": true, 00:22:56.358 "nvme_admin": false, 00:22:56.358 "nvme_io": false, 00:22:56.358 "nvme_io_md": false, 00:22:56.358 "write_zeroes": true, 00:22:56.358 "zcopy": false, 00:22:56.358 "get_zone_info": false, 00:22:56.358 "zone_management": false, 00:22:56.358 "zone_append": false, 00:22:56.358 "compare": false, 00:22:56.358 "compare_and_write": false, 00:22:56.358 "abort": false, 00:22:56.358 "seek_hole": false, 00:22:56.358 "seek_data": false, 00:22:56.358 "copy": false, 00:22:56.358 "nvme_iov_md": false 00:22:56.358 }, 00:22:56.358 "memory_domains": [ 00:22:56.358 { 00:22:56.358 "dma_device_id": "system", 00:22:56.358 "dma_device_type": 1 00:22:56.358 }, 00:22:56.358 { 00:22:56.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.358 "dma_device_type": 2 00:22:56.358 }, 
00:22:56.358 { 00:22:56.358 "dma_device_id": "system", 00:22:56.358 "dma_device_type": 1 00:22:56.358 }, 00:22:56.358 { 00:22:56.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.358 "dma_device_type": 2 00:22:56.358 }, 00:22:56.358 { 00:22:56.358 "dma_device_id": "system", 00:22:56.358 "dma_device_type": 1 00:22:56.358 }, 00:22:56.358 { 00:22:56.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.358 "dma_device_type": 2 00:22:56.358 }, 00:22:56.358 { 00:22:56.358 "dma_device_id": "system", 00:22:56.358 "dma_device_type": 1 00:22:56.358 }, 00:22:56.358 { 00:22:56.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.358 "dma_device_type": 2 00:22:56.358 } 00:22:56.358 ], 00:22:56.358 "driver_specific": { 00:22:56.358 "raid": { 00:22:56.358 "uuid": "e039d8f4-0d0f-4ffb-850c-0561ef12456a", 00:22:56.358 "strip_size_kb": 0, 00:22:56.358 "state": "online", 00:22:56.359 "raid_level": "raid1", 00:22:56.359 "superblock": false, 00:22:56.359 "num_base_bdevs": 4, 00:22:56.359 "num_base_bdevs_discovered": 4, 00:22:56.359 "num_base_bdevs_operational": 4, 00:22:56.359 "base_bdevs_list": [ 00:22:56.359 { 00:22:56.359 "name": "BaseBdev1", 00:22:56.359 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:56.359 "is_configured": true, 00:22:56.359 "data_offset": 0, 00:22:56.359 "data_size": 65536 00:22:56.359 }, 00:22:56.359 { 00:22:56.359 "name": "BaseBdev2", 00:22:56.359 "uuid": "637e8b76-9177-4345-b529-9f4fc7632804", 00:22:56.359 "is_configured": true, 00:22:56.359 "data_offset": 0, 00:22:56.359 "data_size": 65536 00:22:56.359 }, 00:22:56.359 { 00:22:56.359 "name": "BaseBdev3", 00:22:56.359 "uuid": "ad1752a9-7f85-4190-84a7-cef11e31ca0e", 00:22:56.359 "is_configured": true, 00:22:56.359 "data_offset": 0, 00:22:56.359 "data_size": 65536 00:22:56.359 }, 00:22:56.359 { 00:22:56.359 "name": "BaseBdev4", 00:22:56.359 "uuid": "afd782ab-5ac2-477d-87b6-7a99bdff9082", 00:22:56.359 "is_configured": true, 00:22:56.359 "data_offset": 0, 00:22:56.359 "data_size": 65536 00:22:56.359 } 00:22:56.359 ] 00:22:56.359 } 00:22:56.359 } 00:22:56.359 }' 00:22:56.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:56.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:56.359 BaseBdev2 00:22:56.359 BaseBdev3 00:22:56.359 BaseBdev4' 00:22:56.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:56.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:56.359 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:56.617 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:56.617 "name": "BaseBdev1", 00:22:56.617 "aliases": [ 00:22:56.617 "080d6bc0-8ec5-47b3-a749-11aad641b7e4" 00:22:56.617 ], 00:22:56.617 "product_name": "Malloc disk", 00:22:56.617 "block_size": 512, 00:22:56.617 "num_blocks": 65536, 00:22:56.617 "uuid": "080d6bc0-8ec5-47b3-a749-11aad641b7e4", 00:22:56.617 "assigned_rate_limits": { 00:22:56.617 "rw_ios_per_sec": 0, 00:22:56.617 "rw_mbytes_per_sec": 0, 00:22:56.617 "r_mbytes_per_sec": 0, 00:22:56.617 "w_mbytes_per_sec": 0 00:22:56.617 }, 00:22:56.617 "claimed": true, 00:22:56.617 "claim_type": "exclusive_write", 00:22:56.617 "zoned": false, 00:22:56.617 
"supported_io_types": { 00:22:56.617 "read": true, 00:22:56.617 "write": true, 00:22:56.617 "unmap": true, 00:22:56.617 "flush": true, 00:22:56.617 "reset": true, 00:22:56.617 "nvme_admin": false, 00:22:56.617 "nvme_io": false, 00:22:56.617 "nvme_io_md": false, 00:22:56.617 "write_zeroes": true, 00:22:56.617 "zcopy": true, 00:22:56.617 "get_zone_info": false, 00:22:56.617 "zone_management": false, 00:22:56.617 "zone_append": false, 00:22:56.617 "compare": false, 00:22:56.617 "compare_and_write": false, 00:22:56.617 "abort": true, 00:22:56.617 "seek_hole": false, 00:22:56.617 "seek_data": false, 00:22:56.617 "copy": true, 00:22:56.617 "nvme_iov_md": false 00:22:56.617 }, 00:22:56.617 "memory_domains": [ 00:22:56.617 { 00:22:56.617 "dma_device_id": "system", 00:22:56.617 "dma_device_type": 1 00:22:56.617 }, 00:22:56.617 { 00:22:56.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.617 "dma_device_type": 2 00:22:56.617 } 00:22:56.617 ], 00:22:56.617 "driver_specific": {} 00:22:56.617 }' 00:22:56.617 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.617 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.617 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:56.617 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.617 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:56.875 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.133 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.133 "name": "BaseBdev2", 00:22:57.133 "aliases": [ 00:22:57.133 "637e8b76-9177-4345-b529-9f4fc7632804" 00:22:57.133 ], 00:22:57.133 "product_name": "Malloc disk", 00:22:57.133 "block_size": 512, 00:22:57.133 "num_blocks": 65536, 00:22:57.133 "uuid": "637e8b76-9177-4345-b529-9f4fc7632804", 00:22:57.133 "assigned_rate_limits": { 00:22:57.133 "rw_ios_per_sec": 0, 00:22:57.133 "rw_mbytes_per_sec": 0, 00:22:57.133 "r_mbytes_per_sec": 0, 00:22:57.133 "w_mbytes_per_sec": 0 00:22:57.133 }, 00:22:57.133 "claimed": true, 00:22:57.133 "claim_type": "exclusive_write", 00:22:57.133 "zoned": false, 00:22:57.133 "supported_io_types": { 00:22:57.133 "read": true, 00:22:57.133 "write": true, 00:22:57.133 "unmap": true, 00:22:57.133 "flush": true, 00:22:57.133 "reset": true, 00:22:57.133 
"nvme_admin": false, 00:22:57.133 "nvme_io": false, 00:22:57.133 "nvme_io_md": false, 00:22:57.133 "write_zeroes": true, 00:22:57.133 "zcopy": true, 00:22:57.133 "get_zone_info": false, 00:22:57.133 "zone_management": false, 00:22:57.133 "zone_append": false, 00:22:57.133 "compare": false, 00:22:57.133 "compare_and_write": false, 00:22:57.133 "abort": true, 00:22:57.133 "seek_hole": false, 00:22:57.133 "seek_data": false, 00:22:57.133 "copy": true, 00:22:57.133 "nvme_iov_md": false 00:22:57.133 }, 00:22:57.133 "memory_domains": [ 00:22:57.133 { 00:22:57.133 "dma_device_id": "system", 00:22:57.133 "dma_device_type": 1 00:22:57.133 }, 00:22:57.133 { 00:22:57.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.133 "dma_device_type": 2 00:22:57.133 } 00:22:57.133 ], 00:22:57.133 "driver_specific": {} 00:22:57.133 }' 00:22:57.133 10:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.133 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.391 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.650 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.650 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.650 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:57.650 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.650 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.650 "name": "BaseBdev3", 00:22:57.650 "aliases": [ 00:22:57.650 "ad1752a9-7f85-4190-84a7-cef11e31ca0e" 00:22:57.650 ], 00:22:57.650 "product_name": "Malloc disk", 00:22:57.650 "block_size": 512, 00:22:57.650 "num_blocks": 65536, 00:22:57.650 "uuid": "ad1752a9-7f85-4190-84a7-cef11e31ca0e", 00:22:57.650 "assigned_rate_limits": { 00:22:57.650 "rw_ios_per_sec": 0, 00:22:57.650 "rw_mbytes_per_sec": 0, 00:22:57.650 "r_mbytes_per_sec": 0, 00:22:57.650 "w_mbytes_per_sec": 0 00:22:57.650 }, 00:22:57.650 "claimed": true, 00:22:57.650 "claim_type": "exclusive_write", 00:22:57.650 "zoned": false, 00:22:57.650 "supported_io_types": { 00:22:57.650 "read": true, 00:22:57.650 "write": true, 00:22:57.650 "unmap": true, 00:22:57.650 "flush": true, 00:22:57.650 "reset": true, 00:22:57.650 "nvme_admin": false, 00:22:57.650 "nvme_io": false, 00:22:57.650 "nvme_io_md": false, 00:22:57.650 "write_zeroes": true, 00:22:57.650 "zcopy": true, 00:22:57.650 "get_zone_info": 
false, 00:22:57.650 "zone_management": false, 00:22:57.650 "zone_append": false, 00:22:57.650 "compare": false, 00:22:57.650 "compare_and_write": false, 00:22:57.650 "abort": true, 00:22:57.650 "seek_hole": false, 00:22:57.650 "seek_data": false, 00:22:57.650 "copy": true, 00:22:57.650 "nvme_iov_md": false 00:22:57.650 }, 00:22:57.650 "memory_domains": [ 00:22:57.650 { 00:22:57.650 "dma_device_id": "system", 00:22:57.650 "dma_device_type": 1 00:22:57.650 }, 00:22:57.650 { 00:22:57.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.650 "dma_device_type": 2 00:22:57.650 } 00:22:57.650 ], 00:22:57.650 "driver_specific": {} 00:22:57.650 }' 00:22:57.650 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.909 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.167 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.167 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.167 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.167 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:58.167 10:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.425 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.425 "name": "BaseBdev4", 00:22:58.425 "aliases": [ 00:22:58.425 "afd782ab-5ac2-477d-87b6-7a99bdff9082" 00:22:58.425 ], 00:22:58.425 "product_name": "Malloc disk", 00:22:58.425 "block_size": 512, 00:22:58.425 "num_blocks": 65536, 00:22:58.425 "uuid": "afd782ab-5ac2-477d-87b6-7a99bdff9082", 00:22:58.425 "assigned_rate_limits": { 00:22:58.425 "rw_ios_per_sec": 0, 00:22:58.425 "rw_mbytes_per_sec": 0, 00:22:58.425 "r_mbytes_per_sec": 0, 00:22:58.425 "w_mbytes_per_sec": 0 00:22:58.425 }, 00:22:58.425 "claimed": true, 00:22:58.425 "claim_type": "exclusive_write", 00:22:58.425 "zoned": false, 00:22:58.425 "supported_io_types": { 00:22:58.425 "read": true, 00:22:58.425 "write": true, 00:22:58.425 "unmap": true, 00:22:58.425 "flush": true, 00:22:58.425 "reset": true, 00:22:58.425 "nvme_admin": false, 00:22:58.425 "nvme_io": false, 00:22:58.425 "nvme_io_md": false, 00:22:58.425 "write_zeroes": true, 00:22:58.425 "zcopy": true, 00:22:58.425 "get_zone_info": false, 00:22:58.426 "zone_management": false, 00:22:58.426 "zone_append": false, 00:22:58.426 "compare": false, 00:22:58.426 "compare_and_write": false, 00:22:58.426 "abort": true, 
00:22:58.426 "seek_hole": false, 00:22:58.426 "seek_data": false, 00:22:58.426 "copy": true, 00:22:58.426 "nvme_iov_md": false 00:22:58.426 }, 00:22:58.426 "memory_domains": [ 00:22:58.426 { 00:22:58.426 "dma_device_id": "system", 00:22:58.426 "dma_device_type": 1 00:22:58.426 }, 00:22:58.426 { 00:22:58.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.426 "dma_device_type": 2 00:22:58.426 } 00:22:58.426 ], 00:22:58.426 "driver_specific": {} 00:22:58.426 }' 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.426 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.684 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.684 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.684 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.684 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.684 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.684 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:58.943 [2024-07-26 10:33:11.667201] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.943 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.202 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.202 "name": "Existed_Raid", 00:22:59.202 "uuid": "e039d8f4-0d0f-4ffb-850c-0561ef12456a", 00:22:59.202 "strip_size_kb": 0, 00:22:59.202 "state": "online", 00:22:59.202 "raid_level": "raid1", 00:22:59.202 "superblock": false, 00:22:59.202 "num_base_bdevs": 4, 00:22:59.202 "num_base_bdevs_discovered": 3, 00:22:59.202 "num_base_bdevs_operational": 3, 00:22:59.202 "base_bdevs_list": [ 00:22:59.202 { 00:22:59.202 "name": null, 00:22:59.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.202 "is_configured": false, 00:22:59.202 "data_offset": 0, 00:22:59.202 "data_size": 65536 00:22:59.202 }, 00:22:59.202 { 00:22:59.202 "name": "BaseBdev2", 00:22:59.202 "uuid": "637e8b76-9177-4345-b529-9f4fc7632804", 00:22:59.202 "is_configured": true, 00:22:59.202 "data_offset": 0, 00:22:59.202 "data_size": 65536 00:22:59.202 }, 00:22:59.202 { 00:22:59.202 "name": "BaseBdev3", 00:22:59.202 "uuid": "ad1752a9-7f85-4190-84a7-cef11e31ca0e", 00:22:59.202 "is_configured": true, 00:22:59.202 "data_offset": 0, 00:22:59.202 "data_size": 65536 00:22:59.202 }, 00:22:59.202 { 00:22:59.202 "name": "BaseBdev4", 00:22:59.202 "uuid": "afd782ab-5ac2-477d-87b6-7a99bdff9082", 00:22:59.202 "is_configured": true, 00:22:59.202 "data_offset": 0, 00:22:59.202 "data_size": 65536 00:22:59.202 } 00:22:59.202 ] 00:22:59.202 }' 00:22:59.202 10:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.202 10:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.769 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:59.769 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:59.769 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.769 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:00.027 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:00.027 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.027 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:00.027 [2024-07-26 10:33:12.923606] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:00.285 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:00.285 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:00.285 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:00.285 10:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:00.285 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:00.285 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.285 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:00.544 [2024-07-26 10:33:13.390822] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:00.544 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:00.544 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:00.544 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.544 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:00.801 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:00.801 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.801 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:01.058 [2024-07-26 10:33:13.846331] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:01.058 [2024-07-26 10:33:13.846402] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.058 [2024-07-26 10:33:13.856719] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.058 [2024-07-26 10:33:13.856748] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.058 [2024-07-26 10:33:13.856758] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb534f0 name Existed_Raid, state offline 00:23:01.058 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:01.058 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:01.058 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.058 10:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:01.315 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:01.315 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:01.315 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:01.315 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:01.315 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:01.315 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:23:01.572 BaseBdev2 00:23:01.572 10:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:01.573 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:01.573 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:01.573 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:01.573 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:01.573 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:01.573 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.138 10:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:02.397 [ 00:23:02.397 { 00:23:02.397 "name": "BaseBdev2", 00:23:02.397 "aliases": [ 00:23:02.397 "2aa6d793-f405-4f68-91b1-4737efdc9319" 00:23:02.397 ], 00:23:02.397 "product_name": "Malloc disk", 00:23:02.397 "block_size": 512, 00:23:02.397 "num_blocks": 65536, 00:23:02.397 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:02.397 "assigned_rate_limits": { 00:23:02.397 "rw_ios_per_sec": 0, 00:23:02.397 "rw_mbytes_per_sec": 0, 00:23:02.397 "r_mbytes_per_sec": 0, 00:23:02.397 "w_mbytes_per_sec": 0 00:23:02.397 }, 00:23:02.397 "claimed": false, 00:23:02.397 "zoned": false, 00:23:02.397 "supported_io_types": { 00:23:02.397 "read": true, 00:23:02.397 "write": true, 00:23:02.397 "unmap": true, 00:23:02.397 "flush": true, 00:23:02.397 "reset": true, 00:23:02.397 "nvme_admin": false, 00:23:02.397 "nvme_io": false, 00:23:02.397 "nvme_io_md": false, 00:23:02.397 "write_zeroes": true, 00:23:02.397 "zcopy": true, 00:23:02.397 "get_zone_info": false, 00:23:02.397 "zone_management": false, 00:23:02.397 "zone_append": false, 00:23:02.397 "compare": false, 00:23:02.397 "compare_and_write": false, 00:23:02.397 "abort": true, 00:23:02.397 "seek_hole": false, 00:23:02.397 "seek_data": false, 00:23:02.397 "copy": true, 00:23:02.397 "nvme_iov_md": false 00:23:02.397 }, 00:23:02.397 "memory_domains": [ 00:23:02.397 { 00:23:02.397 "dma_device_id": "system", 00:23:02.397 "dma_device_type": 1 00:23:02.397 }, 00:23:02.397 { 00:23:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.397 "dma_device_type": 2 00:23:02.397 } 00:23:02.397 ], 00:23:02.397 "driver_specific": {} 00:23:02.397 } 00:23:02.397 ] 00:23:02.397 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:02.397 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:02.397 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:02.397 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:02.655 BaseBdev3 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:02.913 10:33:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.913 10:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:03.521 [ 00:23:03.521 { 00:23:03.521 "name": "BaseBdev3", 00:23:03.521 "aliases": [ 00:23:03.521 "29207505-31d6-4460-8946-449c00b14834" 00:23:03.521 ], 00:23:03.521 "product_name": "Malloc disk", 00:23:03.521 "block_size": 512, 00:23:03.521 "num_blocks": 65536, 00:23:03.521 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:03.521 "assigned_rate_limits": { 00:23:03.521 "rw_ios_per_sec": 0, 00:23:03.521 "rw_mbytes_per_sec": 0, 00:23:03.521 "r_mbytes_per_sec": 0, 00:23:03.521 "w_mbytes_per_sec": 0 00:23:03.521 }, 00:23:03.521 "claimed": false, 00:23:03.521 "zoned": false, 00:23:03.521 "supported_io_types": { 00:23:03.521 "read": true, 00:23:03.521 "write": true, 00:23:03.521 "unmap": true, 00:23:03.521 "flush": true, 00:23:03.521 "reset": true, 00:23:03.521 "nvme_admin": false, 00:23:03.521 "nvme_io": false, 00:23:03.521 "nvme_io_md": false, 00:23:03.521 "write_zeroes": true, 00:23:03.521 "zcopy": true, 00:23:03.521 "get_zone_info": false, 00:23:03.521 "zone_management": false, 00:23:03.521 "zone_append": false, 00:23:03.521 "compare": false, 00:23:03.521 "compare_and_write": false, 00:23:03.521 "abort": true, 00:23:03.521 "seek_hole": false, 00:23:03.521 "seek_data": false, 00:23:03.521 "copy": true, 00:23:03.521 "nvme_iov_md": false 00:23:03.521 }, 00:23:03.521 "memory_domains": [ 00:23:03.521 { 00:23:03.521 "dma_device_id": "system", 00:23:03.521 "dma_device_type": 1 00:23:03.521 }, 00:23:03.521 { 00:23:03.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.521 "dma_device_type": 2 00:23:03.521 } 00:23:03.521 ], 00:23:03.521 "driver_specific": {} 00:23:03.521 } 00:23:03.521 ] 00:23:03.521 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:03.521 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:03.521 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:03.521 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:03.779 BaseBdev4 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:03.779 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.038 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:04.296 [ 00:23:04.296 { 00:23:04.296 "name": "BaseBdev4", 00:23:04.296 "aliases": [ 00:23:04.296 "04375e52-d53e-4405-9f3c-200d7d157e50" 00:23:04.296 ], 00:23:04.296 "product_name": "Malloc disk", 00:23:04.296 "block_size": 512, 00:23:04.296 "num_blocks": 65536, 00:23:04.296 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:04.296 "assigned_rate_limits": { 00:23:04.296 "rw_ios_per_sec": 0, 00:23:04.296 "rw_mbytes_per_sec": 0, 00:23:04.296 "r_mbytes_per_sec": 0, 00:23:04.296 "w_mbytes_per_sec": 0 00:23:04.296 }, 00:23:04.296 "claimed": false, 00:23:04.296 "zoned": false, 00:23:04.296 "supported_io_types": { 00:23:04.297 "read": true, 00:23:04.297 "write": true, 00:23:04.297 "unmap": true, 00:23:04.297 "flush": true, 00:23:04.297 "reset": true, 00:23:04.297 "nvme_admin": false, 00:23:04.297 "nvme_io": false, 00:23:04.297 "nvme_io_md": false, 00:23:04.297 "write_zeroes": true, 00:23:04.297 "zcopy": true, 00:23:04.297 "get_zone_info": false, 00:23:04.297 "zone_management": false, 00:23:04.297 "zone_append": false, 00:23:04.297 "compare": false, 00:23:04.297 "compare_and_write": false, 00:23:04.297 "abort": true, 00:23:04.297 "seek_hole": false, 00:23:04.297 "seek_data": false, 00:23:04.297 "copy": true, 00:23:04.297 "nvme_iov_md": false 00:23:04.297 }, 00:23:04.297 "memory_domains": [ 00:23:04.297 { 00:23:04.297 "dma_device_id": "system", 00:23:04.297 "dma_device_type": 1 00:23:04.297 }, 00:23:04.297 { 00:23:04.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.297 "dma_device_type": 2 00:23:04.297 } 00:23:04.297 ], 00:23:04.297 "driver_specific": {} 00:23:04.297 } 00:23:04.297 ] 00:23:04.297 10:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:04.297 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:04.297 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:04.297 10:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:04.555 [2024-07-26 10:33:17.205924] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:04.555 [2024-07-26 10:33:17.205962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:04.555 [2024-07-26 10:33:17.205980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.555 [2024-07-26 10:33:17.207194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:04.555 [2024-07-26 10:33:17.207232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:04.555 10:33:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.555 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.813 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.813 "name": "Existed_Raid", 00:23:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.813 "strip_size_kb": 0, 00:23:04.813 "state": "configuring", 00:23:04.813 "raid_level": "raid1", 00:23:04.813 "superblock": false, 00:23:04.813 "num_base_bdevs": 4, 00:23:04.813 "num_base_bdevs_discovered": 3, 00:23:04.813 "num_base_bdevs_operational": 4, 00:23:04.813 "base_bdevs_list": [ 00:23:04.813 { 00:23:04.813 "name": "BaseBdev1", 00:23:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.813 "is_configured": false, 00:23:04.813 "data_offset": 0, 00:23:04.813 "data_size": 0 00:23:04.813 }, 00:23:04.813 { 00:23:04.813 "name": "BaseBdev2", 00:23:04.813 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:04.813 "is_configured": true, 00:23:04.813 "data_offset": 0, 00:23:04.814 "data_size": 65536 00:23:04.814 }, 00:23:04.814 { 00:23:04.814 "name": "BaseBdev3", 00:23:04.814 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:04.814 "is_configured": true, 00:23:04.814 "data_offset": 0, 00:23:04.814 "data_size": 65536 00:23:04.814 }, 00:23:04.814 { 00:23:04.814 "name": "BaseBdev4", 00:23:04.814 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:04.814 "is_configured": true, 00:23:04.814 "data_offset": 0, 00:23:04.814 "data_size": 65536 00:23:04.814 } 00:23:04.814 ] 00:23:04.814 }' 00:23:04.814 10:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.814 10:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:05.380 [2024-07-26 10:33:18.240746] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.380 
10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.380 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.638 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.638 "name": "Existed_Raid", 00:23:05.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.638 "strip_size_kb": 0, 00:23:05.638 "state": "configuring", 00:23:05.638 "raid_level": "raid1", 00:23:05.638 "superblock": false, 00:23:05.638 "num_base_bdevs": 4, 00:23:05.638 "num_base_bdevs_discovered": 2, 00:23:05.638 "num_base_bdevs_operational": 4, 00:23:05.638 "base_bdevs_list": [ 00:23:05.638 { 00:23:05.638 "name": "BaseBdev1", 00:23:05.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.638 "is_configured": false, 00:23:05.638 "data_offset": 0, 00:23:05.638 "data_size": 0 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "name": null, 00:23:05.638 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:05.638 "is_configured": false, 00:23:05.638 "data_offset": 0, 00:23:05.638 "data_size": 65536 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "name": "BaseBdev3", 00:23:05.638 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:05.638 "is_configured": true, 00:23:05.638 "data_offset": 0, 00:23:05.638 "data_size": 65536 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "name": "BaseBdev4", 00:23:05.638 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:05.638 "is_configured": true, 00:23:05.638 "data_offset": 0, 00:23:05.638 "data_size": 65536 00:23:05.638 } 00:23:05.638 ] 00:23:05.638 }' 00:23:05.638 10:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.638 10:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.204 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.204 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:06.462 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:06.462 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:06.720 [2024-07-26 10:33:19.503116] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:06.720 BaseBdev1 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:06.720 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.978 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:07.236 [ 00:23:07.236 { 00:23:07.236 "name": "BaseBdev1", 00:23:07.236 "aliases": [ 00:23:07.236 "871df9b4-aa99-42f8-bcdd-723564315d41" 00:23:07.236 ], 00:23:07.237 "product_name": "Malloc disk", 00:23:07.237 "block_size": 512, 00:23:07.237 "num_blocks": 65536, 00:23:07.237 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:07.237 "assigned_rate_limits": { 00:23:07.237 "rw_ios_per_sec": 0, 00:23:07.237 "rw_mbytes_per_sec": 0, 00:23:07.237 "r_mbytes_per_sec": 0, 00:23:07.237 "w_mbytes_per_sec": 0 00:23:07.237 }, 00:23:07.237 "claimed": true, 00:23:07.237 "claim_type": "exclusive_write", 00:23:07.237 "zoned": false, 00:23:07.237 "supported_io_types": { 00:23:07.237 "read": true, 00:23:07.237 "write": true, 00:23:07.237 "unmap": true, 00:23:07.237 "flush": true, 00:23:07.237 "reset": true, 00:23:07.237 "nvme_admin": false, 00:23:07.237 "nvme_io": false, 00:23:07.237 "nvme_io_md": false, 00:23:07.237 "write_zeroes": true, 00:23:07.237 "zcopy": true, 00:23:07.237 "get_zone_info": false, 00:23:07.237 "zone_management": false, 00:23:07.237 "zone_append": false, 00:23:07.237 "compare": false, 00:23:07.237 "compare_and_write": false, 00:23:07.237 "abort": true, 00:23:07.237 "seek_hole": false, 00:23:07.237 "seek_data": false, 00:23:07.237 "copy": true, 00:23:07.237 "nvme_iov_md": false 00:23:07.237 }, 00:23:07.237 "memory_domains": [ 00:23:07.237 { 00:23:07.237 "dma_device_id": "system", 00:23:07.237 "dma_device_type": 1 00:23:07.237 }, 00:23:07.237 { 00:23:07.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.237 "dma_device_type": 2 00:23:07.237 } 00:23:07.237 ], 00:23:07.237 "driver_specific": {} 00:23:07.237 } 00:23:07.237 ] 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.237 10:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.495 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.495 "name": "Existed_Raid", 00:23:07.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.495 "strip_size_kb": 0, 00:23:07.495 "state": "configuring", 00:23:07.495 "raid_level": "raid1", 00:23:07.495 "superblock": false, 00:23:07.495 "num_base_bdevs": 4, 00:23:07.495 "num_base_bdevs_discovered": 3, 00:23:07.495 "num_base_bdevs_operational": 4, 00:23:07.495 "base_bdevs_list": [ 00:23:07.495 { 00:23:07.495 "name": "BaseBdev1", 00:23:07.495 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:07.495 "is_configured": true, 00:23:07.495 "data_offset": 0, 00:23:07.495 "data_size": 65536 00:23:07.495 }, 00:23:07.495 { 00:23:07.495 "name": null, 00:23:07.495 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:07.495 "is_configured": false, 00:23:07.495 "data_offset": 0, 00:23:07.495 "data_size": 65536 00:23:07.495 }, 00:23:07.495 { 00:23:07.495 "name": "BaseBdev3", 00:23:07.495 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:07.495 "is_configured": true, 00:23:07.495 "data_offset": 0, 00:23:07.495 "data_size": 65536 00:23:07.495 }, 00:23:07.495 { 00:23:07.495 "name": "BaseBdev4", 00:23:07.495 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:07.495 "is_configured": true, 00:23:07.495 "data_offset": 0, 00:23:07.495 "data_size": 65536 00:23:07.495 } 00:23:07.495 ] 00:23:07.495 }' 00:23:07.495 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.495 10:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.062 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:08.062 10:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.320 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:08.320 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:08.320 [2024-07-26 10:33:21.219645] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.579 "name": "Existed_Raid", 00:23:08.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.579 "strip_size_kb": 0, 00:23:08.579 "state": "configuring", 00:23:08.579 "raid_level": "raid1", 00:23:08.579 "superblock": false, 00:23:08.579 "num_base_bdevs": 4, 00:23:08.579 "num_base_bdevs_discovered": 2, 00:23:08.579 "num_base_bdevs_operational": 4, 00:23:08.579 "base_bdevs_list": [ 00:23:08.579 { 00:23:08.579 "name": "BaseBdev1", 00:23:08.579 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:08.579 "is_configured": true, 00:23:08.579 "data_offset": 0, 00:23:08.579 "data_size": 65536 00:23:08.579 }, 00:23:08.579 { 00:23:08.579 "name": null, 00:23:08.579 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:08.579 "is_configured": false, 00:23:08.579 "data_offset": 0, 00:23:08.579 "data_size": 65536 00:23:08.579 }, 00:23:08.579 { 00:23:08.579 "name": null, 00:23:08.579 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:08.579 "is_configured": false, 00:23:08.579 "data_offset": 0, 00:23:08.579 "data_size": 65536 00:23:08.579 }, 00:23:08.579 { 00:23:08.579 "name": "BaseBdev4", 00:23:08.579 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:08.579 "is_configured": true, 00:23:08.579 "data_offset": 0, 00:23:08.579 "data_size": 65536 00:23:08.579 } 00:23:08.579 ] 00:23:08.579 }' 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.579 10:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.145 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.145 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:09.402 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:09.402 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 
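For reference, the base-bdev remove/re-add check that bdev_raid.sh@317-@323 exercises in this run can be reproduced by hand against the same RPC socket; a minimal sketch, assuming the SPDK target from this run is still listening on /var/tmp/spdk-raid.sock, that rpc.py is invoked from the same spdk checkout, and that jq is available (only commands and jq filters that appear in the trace are used):
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# detach BaseBdev3 from the configuring raid and inspect the freed slot
$rpc bdev_raid_remove_base_bdev BaseBdev3
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expected: false
# re-attach the same bdev and confirm the slot is configured again
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
$rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expected: true
Both checks run while Existed_Raid stays in the configuring state, matching the verify_raid_bdev_state calls at bdev_raid.sh@318 and @322 in the trace.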
00:23:09.659 [2024-07-26 10:33:22.470954] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.659 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:09.659 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.659 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.659 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.659 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.660 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.917 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.917 "name": "Existed_Raid", 00:23:09.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.917 "strip_size_kb": 0, 00:23:09.917 "state": "configuring", 00:23:09.917 "raid_level": "raid1", 00:23:09.918 "superblock": false, 00:23:09.918 "num_base_bdevs": 4, 00:23:09.918 "num_base_bdevs_discovered": 3, 00:23:09.918 "num_base_bdevs_operational": 4, 00:23:09.918 "base_bdevs_list": [ 00:23:09.918 { 00:23:09.918 "name": "BaseBdev1", 00:23:09.918 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:09.918 "is_configured": true, 00:23:09.918 "data_offset": 0, 00:23:09.918 "data_size": 65536 00:23:09.918 }, 00:23:09.918 { 00:23:09.918 "name": null, 00:23:09.918 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:09.918 "is_configured": false, 00:23:09.918 "data_offset": 0, 00:23:09.918 "data_size": 65536 00:23:09.918 }, 00:23:09.918 { 00:23:09.918 "name": "BaseBdev3", 00:23:09.918 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:09.918 "is_configured": true, 00:23:09.918 "data_offset": 0, 00:23:09.918 "data_size": 65536 00:23:09.918 }, 00:23:09.918 { 00:23:09.918 "name": "BaseBdev4", 00:23:09.918 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:09.918 "is_configured": true, 00:23:09.918 "data_offset": 0, 00:23:09.918 "data_size": 65536 00:23:09.918 } 00:23:09.918 ] 00:23:09.918 }' 00:23:09.918 10:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.918 10:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.482 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.483 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:23:10.740 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:10.740 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:10.997 [2024-07-26 10:33:23.706220] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.997 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.254 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.254 "name": "Existed_Raid", 00:23:11.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.254 "strip_size_kb": 0, 00:23:11.254 "state": "configuring", 00:23:11.254 "raid_level": "raid1", 00:23:11.254 "superblock": false, 00:23:11.254 "num_base_bdevs": 4, 00:23:11.254 "num_base_bdevs_discovered": 2, 00:23:11.254 "num_base_bdevs_operational": 4, 00:23:11.254 "base_bdevs_list": [ 00:23:11.254 { 00:23:11.254 "name": null, 00:23:11.254 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:11.254 "is_configured": false, 00:23:11.254 "data_offset": 0, 00:23:11.254 "data_size": 65536 00:23:11.254 }, 00:23:11.254 { 00:23:11.254 "name": null, 00:23:11.254 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:11.254 "is_configured": false, 00:23:11.254 "data_offset": 0, 00:23:11.254 "data_size": 65536 00:23:11.254 }, 00:23:11.254 { 00:23:11.254 "name": "BaseBdev3", 00:23:11.254 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:11.254 "is_configured": true, 00:23:11.254 "data_offset": 0, 00:23:11.254 "data_size": 65536 00:23:11.254 }, 00:23:11.254 { 00:23:11.254 "name": "BaseBdev4", 00:23:11.254 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:11.254 "is_configured": true, 00:23:11.254 "data_offset": 0, 00:23:11.254 "data_size": 65536 00:23:11.254 } 00:23:11.254 ] 00:23:11.254 }' 00:23:11.254 10:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.254 10:33:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.818 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:11.818 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.075 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:12.075 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:12.075 [2024-07-26 10:33:24.963543] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.332 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.333 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.333 10:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.333 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.333 "name": "Existed_Raid", 00:23:12.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.333 "strip_size_kb": 0, 00:23:12.333 "state": "configuring", 00:23:12.333 "raid_level": "raid1", 00:23:12.333 "superblock": false, 00:23:12.333 "num_base_bdevs": 4, 00:23:12.333 "num_base_bdevs_discovered": 3, 00:23:12.333 "num_base_bdevs_operational": 4, 00:23:12.333 "base_bdevs_list": [ 00:23:12.333 { 00:23:12.333 "name": null, 00:23:12.333 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:12.333 "is_configured": false, 00:23:12.333 "data_offset": 0, 00:23:12.333 "data_size": 65536 00:23:12.333 }, 00:23:12.333 { 00:23:12.333 "name": "BaseBdev2", 00:23:12.333 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:12.333 "is_configured": true, 00:23:12.333 "data_offset": 0, 00:23:12.333 "data_size": 65536 00:23:12.333 }, 00:23:12.333 { 00:23:12.333 "name": "BaseBdev3", 00:23:12.333 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:12.333 "is_configured": true, 00:23:12.333 "data_offset": 0, 00:23:12.333 "data_size": 65536 00:23:12.333 }, 00:23:12.333 { 00:23:12.333 "name": "BaseBdev4", 
00:23:12.333 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:12.333 "is_configured": true, 00:23:12.333 "data_offset": 0, 00:23:12.333 "data_size": 65536 00:23:12.333 } 00:23:12.333 ] 00:23:12.333 }' 00:23:12.333 10:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.333 10:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.265 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.265 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:13.522 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:13.522 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.522 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:13.780 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 871df9b4-aa99-42f8-bcdd-723564315d41 00:23:14.037 [2024-07-26 10:33:26.751362] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:14.037 [2024-07-26 10:33:26.751395] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x9a4800 00:23:14.037 [2024-07-26 10:33:26.751403] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:14.037 [2024-07-26 10:33:26.751580] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98d1d0 00:23:14.037 [2024-07-26 10:33:26.751693] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9a4800 00:23:14.037 [2024-07-26 10:33:26.751702] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x9a4800 00:23:14.037 [2024-07-26 10:33:26.751843] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.037 NewBaseBdev 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:14.037 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:14.294 10:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:14.551 [ 00:23:14.551 { 00:23:14.551 "name": "NewBaseBdev", 00:23:14.551 "aliases": [ 00:23:14.551 
"871df9b4-aa99-42f8-bcdd-723564315d41" 00:23:14.551 ], 00:23:14.551 "product_name": "Malloc disk", 00:23:14.551 "block_size": 512, 00:23:14.551 "num_blocks": 65536, 00:23:14.551 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:14.551 "assigned_rate_limits": { 00:23:14.551 "rw_ios_per_sec": 0, 00:23:14.551 "rw_mbytes_per_sec": 0, 00:23:14.551 "r_mbytes_per_sec": 0, 00:23:14.551 "w_mbytes_per_sec": 0 00:23:14.551 }, 00:23:14.551 "claimed": true, 00:23:14.551 "claim_type": "exclusive_write", 00:23:14.551 "zoned": false, 00:23:14.551 "supported_io_types": { 00:23:14.551 "read": true, 00:23:14.551 "write": true, 00:23:14.551 "unmap": true, 00:23:14.551 "flush": true, 00:23:14.551 "reset": true, 00:23:14.551 "nvme_admin": false, 00:23:14.551 "nvme_io": false, 00:23:14.551 "nvme_io_md": false, 00:23:14.551 "write_zeroes": true, 00:23:14.551 "zcopy": true, 00:23:14.551 "get_zone_info": false, 00:23:14.551 "zone_management": false, 00:23:14.551 "zone_append": false, 00:23:14.551 "compare": false, 00:23:14.551 "compare_and_write": false, 00:23:14.551 "abort": true, 00:23:14.551 "seek_hole": false, 00:23:14.551 "seek_data": false, 00:23:14.551 "copy": true, 00:23:14.551 "nvme_iov_md": false 00:23:14.551 }, 00:23:14.551 "memory_domains": [ 00:23:14.551 { 00:23:14.551 "dma_device_id": "system", 00:23:14.551 "dma_device_type": 1 00:23:14.551 }, 00:23:14.551 { 00:23:14.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.551 "dma_device_type": 2 00:23:14.551 } 00:23:14.551 ], 00:23:14.551 "driver_specific": {} 00:23:14.551 } 00:23:14.551 ] 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.551 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.809 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.809 "name": "Existed_Raid", 00:23:14.809 "uuid": "c6cefd8b-c719-45ff-89c5-ada79b287b00", 00:23:14.809 "strip_size_kb": 0, 00:23:14.809 "state": "online", 00:23:14.809 "raid_level": "raid1", 00:23:14.809 "superblock": false, 00:23:14.809 "num_base_bdevs": 4, 00:23:14.809 
"num_base_bdevs_discovered": 4, 00:23:14.809 "num_base_bdevs_operational": 4, 00:23:14.809 "base_bdevs_list": [ 00:23:14.809 { 00:23:14.809 "name": "NewBaseBdev", 00:23:14.809 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:14.809 "is_configured": true, 00:23:14.809 "data_offset": 0, 00:23:14.809 "data_size": 65536 00:23:14.809 }, 00:23:14.809 { 00:23:14.809 "name": "BaseBdev2", 00:23:14.809 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:14.809 "is_configured": true, 00:23:14.809 "data_offset": 0, 00:23:14.809 "data_size": 65536 00:23:14.809 }, 00:23:14.809 { 00:23:14.809 "name": "BaseBdev3", 00:23:14.809 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:14.809 "is_configured": true, 00:23:14.809 "data_offset": 0, 00:23:14.809 "data_size": 65536 00:23:14.809 }, 00:23:14.809 { 00:23:14.809 "name": "BaseBdev4", 00:23:14.809 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:14.809 "is_configured": true, 00:23:14.809 "data_offset": 0, 00:23:14.809 "data_size": 65536 00:23:14.809 } 00:23:14.809 ] 00:23:14.809 }' 00:23:14.809 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.809 10:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:15.375 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:15.375 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:15.375 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:15.376 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:15.376 10:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:15.376 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:15.376 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:15.376 [2024-07-26 10:33:28.211509] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.376 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:15.376 "name": "Existed_Raid", 00:23:15.376 "aliases": [ 00:23:15.376 "c6cefd8b-c719-45ff-89c5-ada79b287b00" 00:23:15.376 ], 00:23:15.376 "product_name": "Raid Volume", 00:23:15.376 "block_size": 512, 00:23:15.376 "num_blocks": 65536, 00:23:15.376 "uuid": "c6cefd8b-c719-45ff-89c5-ada79b287b00", 00:23:15.376 "assigned_rate_limits": { 00:23:15.376 "rw_ios_per_sec": 0, 00:23:15.376 "rw_mbytes_per_sec": 0, 00:23:15.376 "r_mbytes_per_sec": 0, 00:23:15.376 "w_mbytes_per_sec": 0 00:23:15.376 }, 00:23:15.376 "claimed": false, 00:23:15.376 "zoned": false, 00:23:15.376 "supported_io_types": { 00:23:15.376 "read": true, 00:23:15.376 "write": true, 00:23:15.376 "unmap": false, 00:23:15.376 "flush": false, 00:23:15.376 "reset": true, 00:23:15.376 "nvme_admin": false, 00:23:15.376 "nvme_io": false, 00:23:15.376 "nvme_io_md": false, 00:23:15.376 "write_zeroes": true, 00:23:15.376 "zcopy": false, 00:23:15.376 "get_zone_info": false, 00:23:15.376 "zone_management": false, 00:23:15.376 "zone_append": false, 00:23:15.376 "compare": false, 00:23:15.376 "compare_and_write": false, 00:23:15.376 "abort": 
false, 00:23:15.376 "seek_hole": false, 00:23:15.376 "seek_data": false, 00:23:15.376 "copy": false, 00:23:15.376 "nvme_iov_md": false 00:23:15.376 }, 00:23:15.376 "memory_domains": [ 00:23:15.376 { 00:23:15.376 "dma_device_id": "system", 00:23:15.376 "dma_device_type": 1 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.376 "dma_device_type": 2 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "system", 00:23:15.376 "dma_device_type": 1 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.376 "dma_device_type": 2 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "system", 00:23:15.376 "dma_device_type": 1 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.376 "dma_device_type": 2 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "system", 00:23:15.376 "dma_device_type": 1 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.376 "dma_device_type": 2 00:23:15.376 } 00:23:15.376 ], 00:23:15.376 "driver_specific": { 00:23:15.376 "raid": { 00:23:15.376 "uuid": "c6cefd8b-c719-45ff-89c5-ada79b287b00", 00:23:15.376 "strip_size_kb": 0, 00:23:15.376 "state": "online", 00:23:15.376 "raid_level": "raid1", 00:23:15.376 "superblock": false, 00:23:15.376 "num_base_bdevs": 4, 00:23:15.376 "num_base_bdevs_discovered": 4, 00:23:15.376 "num_base_bdevs_operational": 4, 00:23:15.376 "base_bdevs_list": [ 00:23:15.376 { 00:23:15.376 "name": "NewBaseBdev", 00:23:15.376 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:15.376 "is_configured": true, 00:23:15.376 "data_offset": 0, 00:23:15.376 "data_size": 65536 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "name": "BaseBdev2", 00:23:15.376 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:15.376 "is_configured": true, 00:23:15.376 "data_offset": 0, 00:23:15.376 "data_size": 65536 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "name": "BaseBdev3", 00:23:15.376 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:15.376 "is_configured": true, 00:23:15.376 "data_offset": 0, 00:23:15.376 "data_size": 65536 00:23:15.376 }, 00:23:15.376 { 00:23:15.376 "name": "BaseBdev4", 00:23:15.376 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:15.376 "is_configured": true, 00:23:15.376 "data_offset": 0, 00:23:15.376 "data_size": 65536 00:23:15.376 } 00:23:15.376 ] 00:23:15.376 } 00:23:15.376 } 00:23:15.376 }' 00:23:15.376 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:15.376 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:15.376 BaseBdev2 00:23:15.376 BaseBdev3 00:23:15.376 BaseBdev4' 00:23:15.634 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.634 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:15.634 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.634 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.634 "name": "NewBaseBdev", 00:23:15.634 "aliases": [ 00:23:15.634 "871df9b4-aa99-42f8-bcdd-723564315d41" 00:23:15.634 ], 00:23:15.634 "product_name": "Malloc disk", 00:23:15.634 
"block_size": 512, 00:23:15.634 "num_blocks": 65536, 00:23:15.634 "uuid": "871df9b4-aa99-42f8-bcdd-723564315d41", 00:23:15.634 "assigned_rate_limits": { 00:23:15.634 "rw_ios_per_sec": 0, 00:23:15.634 "rw_mbytes_per_sec": 0, 00:23:15.634 "r_mbytes_per_sec": 0, 00:23:15.634 "w_mbytes_per_sec": 0 00:23:15.634 }, 00:23:15.634 "claimed": true, 00:23:15.634 "claim_type": "exclusive_write", 00:23:15.634 "zoned": false, 00:23:15.634 "supported_io_types": { 00:23:15.634 "read": true, 00:23:15.634 "write": true, 00:23:15.634 "unmap": true, 00:23:15.634 "flush": true, 00:23:15.634 "reset": true, 00:23:15.634 "nvme_admin": false, 00:23:15.634 "nvme_io": false, 00:23:15.634 "nvme_io_md": false, 00:23:15.634 "write_zeroes": true, 00:23:15.634 "zcopy": true, 00:23:15.634 "get_zone_info": false, 00:23:15.634 "zone_management": false, 00:23:15.634 "zone_append": false, 00:23:15.634 "compare": false, 00:23:15.634 "compare_and_write": false, 00:23:15.634 "abort": true, 00:23:15.634 "seek_hole": false, 00:23:15.634 "seek_data": false, 00:23:15.634 "copy": true, 00:23:15.634 "nvme_iov_md": false 00:23:15.634 }, 00:23:15.634 "memory_domains": [ 00:23:15.634 { 00:23:15.634 "dma_device_id": "system", 00:23:15.634 "dma_device_type": 1 00:23:15.634 }, 00:23:15.634 { 00:23:15.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.634 "dma_device_type": 2 00:23:15.634 } 00:23:15.634 ], 00:23:15.634 "driver_specific": {} 00:23:15.634 }' 00:23:15.634 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.892 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.151 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.151 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.151 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:16.151 10:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.408 "name": "BaseBdev2", 00:23:16.408 "aliases": [ 00:23:16.408 "2aa6d793-f405-4f68-91b1-4737efdc9319" 00:23:16.408 ], 00:23:16.408 "product_name": "Malloc disk", 00:23:16.408 "block_size": 512, 00:23:16.408 "num_blocks": 65536, 00:23:16.408 "uuid": "2aa6d793-f405-4f68-91b1-4737efdc9319", 00:23:16.408 "assigned_rate_limits": { 00:23:16.408 
"rw_ios_per_sec": 0, 00:23:16.408 "rw_mbytes_per_sec": 0, 00:23:16.408 "r_mbytes_per_sec": 0, 00:23:16.408 "w_mbytes_per_sec": 0 00:23:16.408 }, 00:23:16.408 "claimed": true, 00:23:16.408 "claim_type": "exclusive_write", 00:23:16.408 "zoned": false, 00:23:16.408 "supported_io_types": { 00:23:16.408 "read": true, 00:23:16.408 "write": true, 00:23:16.408 "unmap": true, 00:23:16.408 "flush": true, 00:23:16.408 "reset": true, 00:23:16.408 "nvme_admin": false, 00:23:16.408 "nvme_io": false, 00:23:16.408 "nvme_io_md": false, 00:23:16.408 "write_zeroes": true, 00:23:16.408 "zcopy": true, 00:23:16.408 "get_zone_info": false, 00:23:16.408 "zone_management": false, 00:23:16.408 "zone_append": false, 00:23:16.408 "compare": false, 00:23:16.408 "compare_and_write": false, 00:23:16.408 "abort": true, 00:23:16.408 "seek_hole": false, 00:23:16.408 "seek_data": false, 00:23:16.408 "copy": true, 00:23:16.408 "nvme_iov_md": false 00:23:16.408 }, 00:23:16.408 "memory_domains": [ 00:23:16.408 { 00:23:16.408 "dma_device_id": "system", 00:23:16.408 "dma_device_type": 1 00:23:16.408 }, 00:23:16.408 { 00:23:16.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.408 "dma_device_type": 2 00:23:16.408 } 00:23:16.408 ], 00:23:16.408 "driver_specific": {} 00:23:16.408 }' 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.408 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:16.665 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.922 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.922 "name": "BaseBdev3", 00:23:16.922 "aliases": [ 00:23:16.922 "29207505-31d6-4460-8946-449c00b14834" 00:23:16.922 ], 00:23:16.922 "product_name": "Malloc disk", 00:23:16.922 "block_size": 512, 00:23:16.922 "num_blocks": 65536, 00:23:16.922 "uuid": "29207505-31d6-4460-8946-449c00b14834", 00:23:16.922 "assigned_rate_limits": { 00:23:16.922 "rw_ios_per_sec": 0, 00:23:16.922 "rw_mbytes_per_sec": 0, 00:23:16.922 "r_mbytes_per_sec": 0, 00:23:16.922 "w_mbytes_per_sec": 0 00:23:16.922 }, 00:23:16.922 "claimed": true, 
00:23:16.922 "claim_type": "exclusive_write", 00:23:16.922 "zoned": false, 00:23:16.922 "supported_io_types": { 00:23:16.922 "read": true, 00:23:16.922 "write": true, 00:23:16.922 "unmap": true, 00:23:16.922 "flush": true, 00:23:16.922 "reset": true, 00:23:16.923 "nvme_admin": false, 00:23:16.923 "nvme_io": false, 00:23:16.923 "nvme_io_md": false, 00:23:16.923 "write_zeroes": true, 00:23:16.923 "zcopy": true, 00:23:16.923 "get_zone_info": false, 00:23:16.923 "zone_management": false, 00:23:16.923 "zone_append": false, 00:23:16.923 "compare": false, 00:23:16.923 "compare_and_write": false, 00:23:16.923 "abort": true, 00:23:16.923 "seek_hole": false, 00:23:16.923 "seek_data": false, 00:23:16.923 "copy": true, 00:23:16.923 "nvme_iov_md": false 00:23:16.923 }, 00:23:16.923 "memory_domains": [ 00:23:16.923 { 00:23:16.923 "dma_device_id": "system", 00:23:16.923 "dma_device_type": 1 00:23:16.923 }, 00:23:16.923 { 00:23:16.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.923 "dma_device_type": 2 00:23:16.923 } 00:23:16.923 ], 00:23:16.923 "driver_specific": {} 00:23:16.923 }' 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.923 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:17.181 10:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.438 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.438 "name": "BaseBdev4", 00:23:17.438 "aliases": [ 00:23:17.438 "04375e52-d53e-4405-9f3c-200d7d157e50" 00:23:17.438 ], 00:23:17.438 "product_name": "Malloc disk", 00:23:17.438 "block_size": 512, 00:23:17.438 "num_blocks": 65536, 00:23:17.438 "uuid": "04375e52-d53e-4405-9f3c-200d7d157e50", 00:23:17.438 "assigned_rate_limits": { 00:23:17.438 "rw_ios_per_sec": 0, 00:23:17.438 "rw_mbytes_per_sec": 0, 00:23:17.438 "r_mbytes_per_sec": 0, 00:23:17.438 "w_mbytes_per_sec": 0 00:23:17.438 }, 00:23:17.438 "claimed": true, 00:23:17.438 "claim_type": "exclusive_write", 00:23:17.438 "zoned": false, 00:23:17.438 "supported_io_types": { 00:23:17.438 "read": true, 00:23:17.438 "write": true, 00:23:17.438 
"unmap": true, 00:23:17.438 "flush": true, 00:23:17.438 "reset": true, 00:23:17.438 "nvme_admin": false, 00:23:17.438 "nvme_io": false, 00:23:17.438 "nvme_io_md": false, 00:23:17.438 "write_zeroes": true, 00:23:17.438 "zcopy": true, 00:23:17.438 "get_zone_info": false, 00:23:17.438 "zone_management": false, 00:23:17.438 "zone_append": false, 00:23:17.438 "compare": false, 00:23:17.438 "compare_and_write": false, 00:23:17.438 "abort": true, 00:23:17.438 "seek_hole": false, 00:23:17.438 "seek_data": false, 00:23:17.438 "copy": true, 00:23:17.438 "nvme_iov_md": false 00:23:17.438 }, 00:23:17.438 "memory_domains": [ 00:23:17.438 { 00:23:17.438 "dma_device_id": "system", 00:23:17.438 "dma_device_type": 1 00:23:17.438 }, 00:23:17.438 { 00:23:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.438 "dma_device_type": 2 00:23:17.438 } 00:23:17.438 ], 00:23:17.438 "driver_specific": {} 00:23:17.438 }' 00:23:17.438 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.438 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.438 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:17.438 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.696 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:17.954 [2024-07-26 10:33:30.782008] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.954 [2024-07-26 10:33:30.782031] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.954 [2024-07-26 10:33:30.782079] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.954 [2024-07-26 10:33:30.782322] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.954 [2024-07-26 10:33:30.782335] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9a4800 name Existed_Raid, state offline 00:23:17.954 10:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3453393 00:23:17.954 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3453393 ']' 00:23:17.954 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3453393 00:23:17.954 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:23:17.954 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:23:17.954 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3453393 00:23:18.212 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:18.212 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:18.212 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3453393' 00:23:18.212 killing process with pid 3453393 00:23:18.212 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3453393 00:23:18.212 [2024-07-26 10:33:30.858384] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:18.212 10:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3453393 00:23:18.212 [2024-07-26 10:33:30.888739] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:18.212 10:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:18.212 00:23:18.212 real 0m31.470s 00:23:18.212 user 0m57.751s 00:23:18.212 sys 0m5.757s 00:23:18.212 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:18.212 10:33:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.212 ************************************ 00:23:18.212 END TEST raid_state_function_test 00:23:18.212 ************************************ 00:23:18.212 10:33:31 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:23:18.212 10:33:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:18.212 10:33:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:18.212 10:33:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.470 ************************************ 00:23:18.470 START TEST raid_state_function_test_sb 00:23:18.470 ************************************ 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.470 10:33:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:18.470 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3459879 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3459879' 00:23:18.471 Process raid pid: 3459879 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3459879 /var/tmp/spdk-raid.sock 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3459879 ']' 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:18.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.471 10:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.471 [2024-07-26 10:33:31.211247] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
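(annotation, not part of the captured trace) From here the log switches to raid_state_function_test_sb, which repeats the same state checks but passes -s to bdev_raid_create so the raid1 volume carries an on-disk superblock. Before issuing any RPCs, the harness starts a dedicated bdev_svc app and waits for its RPC socket. A minimal sketch of that startup, assuming the paths shown in the trace and that waitforlisten is the helper provided by common/autotest_common.sh:

  APP=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
  RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # launch the bdev service with raid debug logging and remember its pid
  "$APP" -r "$SOCK" -i 0 -L bdev_raid &
  raid_pid=$!
  # block until the app is up and its RPC socket accepts connections (harness helper)
  waitforlisten "$raid_pid" "$SOCK"
  # create the raid1 volume with an on-disk superblock (-s); the base bdevs need not exist yet
  "$RPC" -s "$SOCK" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid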
00:23:18.471 [2024-07-26 10:33:31.211302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:18.471 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:18.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:18.471 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:18.471 [2024-07-26 10:33:31.338050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.729 [2024-07-26 10:33:31.383426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.729 [2024-07-26 10:33:31.442471] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.729 [2024-07-26 10:33:31.442518] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.294 10:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.294 10:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:19.294 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:19.552 [2024-07-26 10:33:32.343885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.552 [2024-07-26 10:33:32.343921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.552 [2024-07-26 10:33:32.343931] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:19.552 [2024-07-26 10:33:32.343942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:19.552 [2024-07-26 10:33:32.343950] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:19.552 [2024-07-26 10:33:32.343960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:19.552 [2024-07-26 10:33:32.343968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:19.552 [2024-07-26 10:33:32.343984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:19.552 10:33:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.552 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.810 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.810 "name": "Existed_Raid", 00:23:19.810 "uuid": "c0844cb3-6826-48a9-b89a-2cabc95d3f81", 00:23:19.810 "strip_size_kb": 0, 00:23:19.810 "state": "configuring", 00:23:19.810 "raid_level": "raid1", 00:23:19.810 "superblock": true, 00:23:19.810 "num_base_bdevs": 4, 00:23:19.810 "num_base_bdevs_discovered": 0, 00:23:19.810 "num_base_bdevs_operational": 4, 00:23:19.810 "base_bdevs_list": [ 00:23:19.810 { 00:23:19.810 "name": "BaseBdev1", 00:23:19.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.810 "is_configured": false, 00:23:19.810 "data_offset": 0, 00:23:19.810 "data_size": 0 00:23:19.810 }, 00:23:19.810 { 00:23:19.810 "name": "BaseBdev2", 00:23:19.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.810 "is_configured": false, 00:23:19.810 "data_offset": 0, 00:23:19.810 "data_size": 0 00:23:19.810 }, 00:23:19.810 { 00:23:19.810 "name": "BaseBdev3", 00:23:19.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.810 "is_configured": false, 00:23:19.810 "data_offset": 0, 00:23:19.810 "data_size": 0 00:23:19.810 }, 00:23:19.810 { 00:23:19.810 "name": "BaseBdev4", 00:23:19.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.810 "is_configured": false, 00:23:19.810 "data_offset": 0, 00:23:19.810 "data_size": 0 00:23:19.810 } 00:23:19.810 ] 00:23:19.810 }' 00:23:19.810 10:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.810 10:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.377 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:20.636 [2024-07-26 10:33:33.342364] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:20.636 [2024-07-26 10:33:33.342393] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f44d00 name Existed_Raid, state configuring 00:23:20.636 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:20.892 [2024-07-26 10:33:33.566980] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.892 [2024-07-26 10:33:33.567006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:20.892 [2024-07-26 10:33:33.567014] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:20.892 [2024-07-26 10:33:33.567025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:20.893 [2024-07-26 10:33:33.567033] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:20.893 [2024-07-26 10:33:33.567042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:20.893 [2024-07-26 10:33:33.567050] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:20.893 [2024-07-26 10:33:33.567060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:20.893 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:21.149 [2024-07-26 10:33:33.801208] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.149 BaseBdev1 00:23:21.149 10:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:21.149 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:21.150 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:21.150 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:21.150 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:21.150 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:21.150 10:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.150 10:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.407 [ 00:23:21.407 { 00:23:21.407 "name": "BaseBdev1", 00:23:21.407 "aliases": [ 00:23:21.407 "d673dc8a-1407-46f6-88ef-313fdaaaff9c" 00:23:21.407 ], 00:23:21.407 "product_name": "Malloc disk", 00:23:21.407 "block_size": 512, 00:23:21.407 "num_blocks": 65536, 00:23:21.407 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:21.407 "assigned_rate_limits": { 00:23:21.407 "rw_ios_per_sec": 0, 00:23:21.407 "rw_mbytes_per_sec": 0, 00:23:21.407 "r_mbytes_per_sec": 0, 00:23:21.407 "w_mbytes_per_sec": 0 00:23:21.407 }, 00:23:21.407 "claimed": true, 00:23:21.407 "claim_type": "exclusive_write", 00:23:21.407 "zoned": false, 00:23:21.407 "supported_io_types": { 00:23:21.407 "read": true, 00:23:21.407 "write": true, 00:23:21.407 "unmap": true, 00:23:21.407 "flush": true, 00:23:21.407 "reset": true, 00:23:21.407 "nvme_admin": false, 00:23:21.407 "nvme_io": false, 00:23:21.407 "nvme_io_md": false, 00:23:21.407 "write_zeroes": true, 00:23:21.407 
"zcopy": true, 00:23:21.407 "get_zone_info": false, 00:23:21.407 "zone_management": false, 00:23:21.407 "zone_append": false, 00:23:21.407 "compare": false, 00:23:21.407 "compare_and_write": false, 00:23:21.407 "abort": true, 00:23:21.407 "seek_hole": false, 00:23:21.407 "seek_data": false, 00:23:21.407 "copy": true, 00:23:21.407 "nvme_iov_md": false 00:23:21.407 }, 00:23:21.407 "memory_domains": [ 00:23:21.407 { 00:23:21.407 "dma_device_id": "system", 00:23:21.407 "dma_device_type": 1 00:23:21.407 }, 00:23:21.407 { 00:23:21.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.407 "dma_device_type": 2 00:23:21.407 } 00:23:21.407 ], 00:23:21.407 "driver_specific": {} 00:23:21.407 } 00:23:21.407 ] 00:23:21.407 10:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:21.407 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:21.407 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:21.407 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:21.407 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:21.407 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.408 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.665 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.665 "name": "Existed_Raid", 00:23:21.665 "uuid": "1017dd94-a763-4a86-9915-50bf3d8a2464", 00:23:21.665 "strip_size_kb": 0, 00:23:21.665 "state": "configuring", 00:23:21.665 "raid_level": "raid1", 00:23:21.665 "superblock": true, 00:23:21.665 "num_base_bdevs": 4, 00:23:21.665 "num_base_bdevs_discovered": 1, 00:23:21.665 "num_base_bdevs_operational": 4, 00:23:21.665 "base_bdevs_list": [ 00:23:21.665 { 00:23:21.665 "name": "BaseBdev1", 00:23:21.665 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:21.665 "is_configured": true, 00:23:21.666 "data_offset": 2048, 00:23:21.666 "data_size": 63488 00:23:21.666 }, 00:23:21.666 { 00:23:21.666 "name": "BaseBdev2", 00:23:21.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.666 "is_configured": false, 00:23:21.666 "data_offset": 0, 00:23:21.666 "data_size": 0 00:23:21.666 }, 00:23:21.666 { 00:23:21.666 "name": "BaseBdev3", 00:23:21.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.666 "is_configured": false, 00:23:21.666 "data_offset": 0, 00:23:21.666 "data_size": 0 00:23:21.666 }, 00:23:21.666 { 00:23:21.666 "name": 
"BaseBdev4", 00:23:21.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.666 "is_configured": false, 00:23:21.666 "data_offset": 0, 00:23:21.666 "data_size": 0 00:23:21.666 } 00:23:21.666 ] 00:23:21.666 }' 00:23:21.666 10:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.666 10:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.231 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:22.489 [2024-07-26 10:33:35.244998] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.489 [2024-07-26 10:33:35.245035] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f44630 name Existed_Raid, state configuring 00:23:22.489 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:22.748 [2024-07-26 10:33:35.473637] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.748 [2024-07-26 10:33:35.474958] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.748 [2024-07-26 10:33:35.474992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.748 [2024-07-26 10:33:35.475001] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:22.748 [2024-07-26 10:33:35.475012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:22.748 [2024-07-26 10:33:35.475020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:22.748 [2024-07-26 10:33:35.475030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.748 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.007 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.007 "name": "Existed_Raid", 00:23:23.007 "uuid": "b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:23.007 "strip_size_kb": 0, 00:23:23.007 "state": "configuring", 00:23:23.007 "raid_level": "raid1", 00:23:23.007 "superblock": true, 00:23:23.007 "num_base_bdevs": 4, 00:23:23.007 "num_base_bdevs_discovered": 1, 00:23:23.007 "num_base_bdevs_operational": 4, 00:23:23.007 "base_bdevs_list": [ 00:23:23.007 { 00:23:23.007 "name": "BaseBdev1", 00:23:23.007 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:23.007 "is_configured": true, 00:23:23.007 "data_offset": 2048, 00:23:23.007 "data_size": 63488 00:23:23.007 }, 00:23:23.007 { 00:23:23.007 "name": "BaseBdev2", 00:23:23.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.007 "is_configured": false, 00:23:23.007 "data_offset": 0, 00:23:23.007 "data_size": 0 00:23:23.007 }, 00:23:23.007 { 00:23:23.007 "name": "BaseBdev3", 00:23:23.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.007 "is_configured": false, 00:23:23.007 "data_offset": 0, 00:23:23.007 "data_size": 0 00:23:23.007 }, 00:23:23.007 { 00:23:23.007 "name": "BaseBdev4", 00:23:23.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.007 "is_configured": false, 00:23:23.007 "data_offset": 0, 00:23:23.007 "data_size": 0 00:23:23.007 } 00:23:23.007 ] 00:23:23.007 }' 00:23:23.007 10:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.007 10:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.573 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:23.831 [2024-07-26 10:33:36.495394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:23.831 BaseBdev2 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:23.831 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:24.089 [ 00:23:24.089 { 00:23:24.089 "name": "BaseBdev2", 00:23:24.089 "aliases": [ 00:23:24.089 "bd52736a-0860-49f5-a20f-8ea4295d09af" 
00:23:24.089 ], 00:23:24.089 "product_name": "Malloc disk", 00:23:24.089 "block_size": 512, 00:23:24.089 "num_blocks": 65536, 00:23:24.089 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:24.089 "assigned_rate_limits": { 00:23:24.089 "rw_ios_per_sec": 0, 00:23:24.089 "rw_mbytes_per_sec": 0, 00:23:24.089 "r_mbytes_per_sec": 0, 00:23:24.089 "w_mbytes_per_sec": 0 00:23:24.089 }, 00:23:24.089 "claimed": true, 00:23:24.089 "claim_type": "exclusive_write", 00:23:24.089 "zoned": false, 00:23:24.089 "supported_io_types": { 00:23:24.089 "read": true, 00:23:24.089 "write": true, 00:23:24.089 "unmap": true, 00:23:24.089 "flush": true, 00:23:24.089 "reset": true, 00:23:24.089 "nvme_admin": false, 00:23:24.089 "nvme_io": false, 00:23:24.089 "nvme_io_md": false, 00:23:24.089 "write_zeroes": true, 00:23:24.089 "zcopy": true, 00:23:24.089 "get_zone_info": false, 00:23:24.089 "zone_management": false, 00:23:24.089 "zone_append": false, 00:23:24.089 "compare": false, 00:23:24.089 "compare_and_write": false, 00:23:24.089 "abort": true, 00:23:24.089 "seek_hole": false, 00:23:24.089 "seek_data": false, 00:23:24.089 "copy": true, 00:23:24.089 "nvme_iov_md": false 00:23:24.089 }, 00:23:24.089 "memory_domains": [ 00:23:24.089 { 00:23:24.089 "dma_device_id": "system", 00:23:24.089 "dma_device_type": 1 00:23:24.089 }, 00:23:24.089 { 00:23:24.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.089 "dma_device_type": 2 00:23:24.089 } 00:23:24.089 ], 00:23:24.089 "driver_specific": {} 00:23:24.089 } 00:23:24.089 ] 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.089 10:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.348 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.348 "name": "Existed_Raid", 00:23:24.348 "uuid": 
"b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:24.348 "strip_size_kb": 0, 00:23:24.348 "state": "configuring", 00:23:24.348 "raid_level": "raid1", 00:23:24.348 "superblock": true, 00:23:24.348 "num_base_bdevs": 4, 00:23:24.348 "num_base_bdevs_discovered": 2, 00:23:24.348 "num_base_bdevs_operational": 4, 00:23:24.348 "base_bdevs_list": [ 00:23:24.348 { 00:23:24.348 "name": "BaseBdev1", 00:23:24.348 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:24.348 "is_configured": true, 00:23:24.348 "data_offset": 2048, 00:23:24.348 "data_size": 63488 00:23:24.348 }, 00:23:24.348 { 00:23:24.348 "name": "BaseBdev2", 00:23:24.348 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:24.348 "is_configured": true, 00:23:24.348 "data_offset": 2048, 00:23:24.348 "data_size": 63488 00:23:24.348 }, 00:23:24.348 { 00:23:24.348 "name": "BaseBdev3", 00:23:24.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.348 "is_configured": false, 00:23:24.348 "data_offset": 0, 00:23:24.348 "data_size": 0 00:23:24.348 }, 00:23:24.348 { 00:23:24.348 "name": "BaseBdev4", 00:23:24.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.348 "is_configured": false, 00:23:24.348 "data_offset": 0, 00:23:24.348 "data_size": 0 00:23:24.348 } 00:23:24.348 ] 00:23:24.348 }' 00:23:24.348 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.348 10:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.915 10:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:25.173 [2024-07-26 10:33:38.002568] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:25.173 BaseBdev3 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:25.173 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:25.431 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:25.689 [ 00:23:25.689 { 00:23:25.689 "name": "BaseBdev3", 00:23:25.689 "aliases": [ 00:23:25.689 "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef" 00:23:25.689 ], 00:23:25.689 "product_name": "Malloc disk", 00:23:25.689 "block_size": 512, 00:23:25.689 "num_blocks": 65536, 00:23:25.689 "uuid": "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef", 00:23:25.689 "assigned_rate_limits": { 00:23:25.689 "rw_ios_per_sec": 0, 00:23:25.689 "rw_mbytes_per_sec": 0, 00:23:25.689 "r_mbytes_per_sec": 0, 00:23:25.689 "w_mbytes_per_sec": 0 00:23:25.689 }, 00:23:25.689 "claimed": true, 00:23:25.689 "claim_type": 
"exclusive_write", 00:23:25.689 "zoned": false, 00:23:25.689 "supported_io_types": { 00:23:25.689 "read": true, 00:23:25.689 "write": true, 00:23:25.689 "unmap": true, 00:23:25.689 "flush": true, 00:23:25.689 "reset": true, 00:23:25.689 "nvme_admin": false, 00:23:25.689 "nvme_io": false, 00:23:25.689 "nvme_io_md": false, 00:23:25.689 "write_zeroes": true, 00:23:25.689 "zcopy": true, 00:23:25.689 "get_zone_info": false, 00:23:25.689 "zone_management": false, 00:23:25.689 "zone_append": false, 00:23:25.689 "compare": false, 00:23:25.689 "compare_and_write": false, 00:23:25.689 "abort": true, 00:23:25.689 "seek_hole": false, 00:23:25.689 "seek_data": false, 00:23:25.689 "copy": true, 00:23:25.689 "nvme_iov_md": false 00:23:25.689 }, 00:23:25.689 "memory_domains": [ 00:23:25.689 { 00:23:25.689 "dma_device_id": "system", 00:23:25.689 "dma_device_type": 1 00:23:25.689 }, 00:23:25.689 { 00:23:25.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.689 "dma_device_type": 2 00:23:25.689 } 00:23:25.689 ], 00:23:25.689 "driver_specific": {} 00:23:25.689 } 00:23:25.689 ] 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.689 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.948 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.948 "name": "Existed_Raid", 00:23:25.948 "uuid": "b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:25.948 "strip_size_kb": 0, 00:23:25.948 "state": "configuring", 00:23:25.948 "raid_level": "raid1", 00:23:25.948 "superblock": true, 00:23:25.948 "num_base_bdevs": 4, 00:23:25.948 "num_base_bdevs_discovered": 3, 00:23:25.948 "num_base_bdevs_operational": 4, 00:23:25.948 "base_bdevs_list": [ 00:23:25.948 { 00:23:25.948 "name": "BaseBdev1", 00:23:25.948 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:25.948 
"is_configured": true, 00:23:25.948 "data_offset": 2048, 00:23:25.948 "data_size": 63488 00:23:25.948 }, 00:23:25.948 { 00:23:25.948 "name": "BaseBdev2", 00:23:25.948 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:25.948 "is_configured": true, 00:23:25.948 "data_offset": 2048, 00:23:25.948 "data_size": 63488 00:23:25.948 }, 00:23:25.948 { 00:23:25.948 "name": "BaseBdev3", 00:23:25.948 "uuid": "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef", 00:23:25.948 "is_configured": true, 00:23:25.948 "data_offset": 2048, 00:23:25.948 "data_size": 63488 00:23:25.948 }, 00:23:25.948 { 00:23:25.948 "name": "BaseBdev4", 00:23:25.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.948 "is_configured": false, 00:23:25.948 "data_offset": 0, 00:23:25.948 "data_size": 0 00:23:25.948 } 00:23:25.948 ] 00:23:25.948 }' 00:23:25.948 10:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.948 10:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.514 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:26.772 [2024-07-26 10:33:39.437608] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:26.772 [2024-07-26 10:33:39.437766] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x20f74f0 00:23:26.772 [2024-07-26 10:33:39.437778] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:26.772 [2024-07-26 10:33:39.437937] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f49c40 00:23:26.772 [2024-07-26 10:33:39.438058] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20f74f0 00:23:26.772 [2024-07-26 10:33:39.438067] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20f74f0 00:23:26.772 [2024-07-26 10:33:39.438168] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.772 BaseBdev4 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:26.772 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:27.032 [ 00:23:27.032 { 00:23:27.032 "name": "BaseBdev4", 00:23:27.032 "aliases": [ 00:23:27.032 "b3302434-4ab5-417f-b01f-2099a133d89c" 00:23:27.032 ], 00:23:27.032 "product_name": "Malloc disk", 00:23:27.032 "block_size": 512, 00:23:27.032 "num_blocks": 65536, 00:23:27.032 
"uuid": "b3302434-4ab5-417f-b01f-2099a133d89c", 00:23:27.032 "assigned_rate_limits": { 00:23:27.032 "rw_ios_per_sec": 0, 00:23:27.032 "rw_mbytes_per_sec": 0, 00:23:27.032 "r_mbytes_per_sec": 0, 00:23:27.032 "w_mbytes_per_sec": 0 00:23:27.032 }, 00:23:27.032 "claimed": true, 00:23:27.032 "claim_type": "exclusive_write", 00:23:27.032 "zoned": false, 00:23:27.032 "supported_io_types": { 00:23:27.032 "read": true, 00:23:27.032 "write": true, 00:23:27.032 "unmap": true, 00:23:27.032 "flush": true, 00:23:27.032 "reset": true, 00:23:27.032 "nvme_admin": false, 00:23:27.032 "nvme_io": false, 00:23:27.032 "nvme_io_md": false, 00:23:27.032 "write_zeroes": true, 00:23:27.032 "zcopy": true, 00:23:27.032 "get_zone_info": false, 00:23:27.032 "zone_management": false, 00:23:27.032 "zone_append": false, 00:23:27.032 "compare": false, 00:23:27.032 "compare_and_write": false, 00:23:27.032 "abort": true, 00:23:27.032 "seek_hole": false, 00:23:27.032 "seek_data": false, 00:23:27.032 "copy": true, 00:23:27.032 "nvme_iov_md": false 00:23:27.032 }, 00:23:27.032 "memory_domains": [ 00:23:27.032 { 00:23:27.032 "dma_device_id": "system", 00:23:27.032 "dma_device_type": 1 00:23:27.032 }, 00:23:27.032 { 00:23:27.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.032 "dma_device_type": 2 00:23:27.032 } 00:23:27.032 ], 00:23:27.032 "driver_specific": {} 00:23:27.032 } 00:23:27.032 ] 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.032 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.290 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.290 "name": "Existed_Raid", 00:23:27.290 "uuid": "b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:27.290 "strip_size_kb": 0, 00:23:27.290 "state": "online", 00:23:27.290 "raid_level": "raid1", 00:23:27.290 "superblock": 
true, 00:23:27.290 "num_base_bdevs": 4, 00:23:27.290 "num_base_bdevs_discovered": 4, 00:23:27.290 "num_base_bdevs_operational": 4, 00:23:27.290 "base_bdevs_list": [ 00:23:27.290 { 00:23:27.290 "name": "BaseBdev1", 00:23:27.290 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:27.290 "is_configured": true, 00:23:27.290 "data_offset": 2048, 00:23:27.290 "data_size": 63488 00:23:27.290 }, 00:23:27.290 { 00:23:27.290 "name": "BaseBdev2", 00:23:27.290 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:27.290 "is_configured": true, 00:23:27.290 "data_offset": 2048, 00:23:27.290 "data_size": 63488 00:23:27.290 }, 00:23:27.290 { 00:23:27.290 "name": "BaseBdev3", 00:23:27.290 "uuid": "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef", 00:23:27.290 "is_configured": true, 00:23:27.290 "data_offset": 2048, 00:23:27.290 "data_size": 63488 00:23:27.290 }, 00:23:27.290 { 00:23:27.290 "name": "BaseBdev4", 00:23:27.290 "uuid": "b3302434-4ab5-417f-b01f-2099a133d89c", 00:23:27.290 "is_configured": true, 00:23:27.290 "data_offset": 2048, 00:23:27.290 "data_size": 63488 00:23:27.290 } 00:23:27.290 ] 00:23:27.290 }' 00:23:27.291 10:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.291 10:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:27.857 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:27.857 [2024-07-26 10:33:40.741566] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.115 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:28.115 "name": "Existed_Raid", 00:23:28.115 "aliases": [ 00:23:28.115 "b6909114-a951-484f-97c6-6aaafdacdcd2" 00:23:28.115 ], 00:23:28.115 "product_name": "Raid Volume", 00:23:28.115 "block_size": 512, 00:23:28.115 "num_blocks": 63488, 00:23:28.115 "uuid": "b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:28.115 "assigned_rate_limits": { 00:23:28.115 "rw_ios_per_sec": 0, 00:23:28.115 "rw_mbytes_per_sec": 0, 00:23:28.115 "r_mbytes_per_sec": 0, 00:23:28.115 "w_mbytes_per_sec": 0 00:23:28.115 }, 00:23:28.115 "claimed": false, 00:23:28.115 "zoned": false, 00:23:28.115 "supported_io_types": { 00:23:28.115 "read": true, 00:23:28.115 "write": true, 00:23:28.115 "unmap": false, 00:23:28.115 "flush": false, 00:23:28.115 "reset": true, 00:23:28.115 "nvme_admin": false, 00:23:28.115 "nvme_io": false, 00:23:28.115 "nvme_io_md": false, 00:23:28.115 "write_zeroes": true, 00:23:28.115 "zcopy": false, 00:23:28.115 "get_zone_info": false, 00:23:28.115 "zone_management": false, 00:23:28.115 "zone_append": 
false, 00:23:28.115 "compare": false, 00:23:28.115 "compare_and_write": false, 00:23:28.115 "abort": false, 00:23:28.115 "seek_hole": false, 00:23:28.115 "seek_data": false, 00:23:28.115 "copy": false, 00:23:28.115 "nvme_iov_md": false 00:23:28.115 }, 00:23:28.115 "memory_domains": [ 00:23:28.115 { 00:23:28.115 "dma_device_id": "system", 00:23:28.115 "dma_device_type": 1 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.115 "dma_device_type": 2 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "system", 00:23:28.115 "dma_device_type": 1 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.115 "dma_device_type": 2 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "system", 00:23:28.115 "dma_device_type": 1 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.115 "dma_device_type": 2 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "system", 00:23:28.115 "dma_device_type": 1 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.115 "dma_device_type": 2 00:23:28.115 } 00:23:28.115 ], 00:23:28.115 "driver_specific": { 00:23:28.115 "raid": { 00:23:28.115 "uuid": "b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:28.115 "strip_size_kb": 0, 00:23:28.115 "state": "online", 00:23:28.115 "raid_level": "raid1", 00:23:28.115 "superblock": true, 00:23:28.115 "num_base_bdevs": 4, 00:23:28.115 "num_base_bdevs_discovered": 4, 00:23:28.115 "num_base_bdevs_operational": 4, 00:23:28.115 "base_bdevs_list": [ 00:23:28.115 { 00:23:28.115 "name": "BaseBdev1", 00:23:28.115 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:28.115 "is_configured": true, 00:23:28.115 "data_offset": 2048, 00:23:28.115 "data_size": 63488 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "name": "BaseBdev2", 00:23:28.115 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:28.115 "is_configured": true, 00:23:28.115 "data_offset": 2048, 00:23:28.115 "data_size": 63488 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "name": "BaseBdev3", 00:23:28.115 "uuid": "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef", 00:23:28.115 "is_configured": true, 00:23:28.115 "data_offset": 2048, 00:23:28.115 "data_size": 63488 00:23:28.115 }, 00:23:28.115 { 00:23:28.115 "name": "BaseBdev4", 00:23:28.115 "uuid": "b3302434-4ab5-417f-b01f-2099a133d89c", 00:23:28.115 "is_configured": true, 00:23:28.115 "data_offset": 2048, 00:23:28.115 "data_size": 63488 00:23:28.115 } 00:23:28.115 ] 00:23:28.115 } 00:23:28.115 } 00:23:28.115 }' 00:23:28.115 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.115 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:28.115 BaseBdev2 00:23:28.116 BaseBdev3 00:23:28.116 BaseBdev4' 00:23:28.116 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.116 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:28.116 10:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:28.374 "name": "BaseBdev1", 00:23:28.374 "aliases": [ 00:23:28.374 
"d673dc8a-1407-46f6-88ef-313fdaaaff9c" 00:23:28.374 ], 00:23:28.374 "product_name": "Malloc disk", 00:23:28.374 "block_size": 512, 00:23:28.374 "num_blocks": 65536, 00:23:28.374 "uuid": "d673dc8a-1407-46f6-88ef-313fdaaaff9c", 00:23:28.374 "assigned_rate_limits": { 00:23:28.374 "rw_ios_per_sec": 0, 00:23:28.374 "rw_mbytes_per_sec": 0, 00:23:28.374 "r_mbytes_per_sec": 0, 00:23:28.374 "w_mbytes_per_sec": 0 00:23:28.374 }, 00:23:28.374 "claimed": true, 00:23:28.374 "claim_type": "exclusive_write", 00:23:28.374 "zoned": false, 00:23:28.374 "supported_io_types": { 00:23:28.374 "read": true, 00:23:28.374 "write": true, 00:23:28.374 "unmap": true, 00:23:28.374 "flush": true, 00:23:28.374 "reset": true, 00:23:28.374 "nvme_admin": false, 00:23:28.374 "nvme_io": false, 00:23:28.374 "nvme_io_md": false, 00:23:28.374 "write_zeroes": true, 00:23:28.374 "zcopy": true, 00:23:28.374 "get_zone_info": false, 00:23:28.374 "zone_management": false, 00:23:28.374 "zone_append": false, 00:23:28.374 "compare": false, 00:23:28.374 "compare_and_write": false, 00:23:28.374 "abort": true, 00:23:28.374 "seek_hole": false, 00:23:28.374 "seek_data": false, 00:23:28.374 "copy": true, 00:23:28.374 "nvme_iov_md": false 00:23:28.374 }, 00:23:28.374 "memory_domains": [ 00:23:28.374 { 00:23:28.374 "dma_device_id": "system", 00:23:28.374 "dma_device_type": 1 00:23:28.374 }, 00:23:28.374 { 00:23:28.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.374 "dma_device_type": 2 00:23:28.374 } 00:23:28.374 ], 00:23:28.374 "driver_specific": {} 00:23:28.374 }' 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.374 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:28.632 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:28.890 "name": "BaseBdev2", 00:23:28.890 "aliases": [ 00:23:28.890 "bd52736a-0860-49f5-a20f-8ea4295d09af" 00:23:28.890 ], 00:23:28.890 "product_name": "Malloc disk", 00:23:28.890 "block_size": 512, 
00:23:28.890 "num_blocks": 65536, 00:23:28.890 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:28.890 "assigned_rate_limits": { 00:23:28.890 "rw_ios_per_sec": 0, 00:23:28.890 "rw_mbytes_per_sec": 0, 00:23:28.890 "r_mbytes_per_sec": 0, 00:23:28.890 "w_mbytes_per_sec": 0 00:23:28.890 }, 00:23:28.890 "claimed": true, 00:23:28.890 "claim_type": "exclusive_write", 00:23:28.890 "zoned": false, 00:23:28.890 "supported_io_types": { 00:23:28.890 "read": true, 00:23:28.890 "write": true, 00:23:28.890 "unmap": true, 00:23:28.890 "flush": true, 00:23:28.890 "reset": true, 00:23:28.890 "nvme_admin": false, 00:23:28.890 "nvme_io": false, 00:23:28.890 "nvme_io_md": false, 00:23:28.890 "write_zeroes": true, 00:23:28.890 "zcopy": true, 00:23:28.890 "get_zone_info": false, 00:23:28.890 "zone_management": false, 00:23:28.890 "zone_append": false, 00:23:28.890 "compare": false, 00:23:28.890 "compare_and_write": false, 00:23:28.890 "abort": true, 00:23:28.890 "seek_hole": false, 00:23:28.890 "seek_data": false, 00:23:28.890 "copy": true, 00:23:28.890 "nvme_iov_md": false 00:23:28.890 }, 00:23:28.890 "memory_domains": [ 00:23:28.890 { 00:23:28.890 "dma_device_id": "system", 00:23:28.890 "dma_device_type": 1 00:23:28.890 }, 00:23:28.890 { 00:23:28.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.890 "dma_device_type": 2 00:23:28.890 } 00:23:28.890 ], 00:23:28.890 "driver_specific": {} 00:23:28.890 }' 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:28.890 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:29.149 10:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.407 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.407 "name": "BaseBdev3", 00:23:29.407 "aliases": [ 00:23:29.407 "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef" 00:23:29.407 ], 00:23:29.407 "product_name": "Malloc disk", 00:23:29.407 "block_size": 512, 00:23:29.407 "num_blocks": 65536, 00:23:29.407 "uuid": "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef", 00:23:29.407 "assigned_rate_limits": { 
00:23:29.407 "rw_ios_per_sec": 0, 00:23:29.407 "rw_mbytes_per_sec": 0, 00:23:29.407 "r_mbytes_per_sec": 0, 00:23:29.407 "w_mbytes_per_sec": 0 00:23:29.407 }, 00:23:29.407 "claimed": true, 00:23:29.407 "claim_type": "exclusive_write", 00:23:29.407 "zoned": false, 00:23:29.407 "supported_io_types": { 00:23:29.407 "read": true, 00:23:29.407 "write": true, 00:23:29.407 "unmap": true, 00:23:29.407 "flush": true, 00:23:29.407 "reset": true, 00:23:29.407 "nvme_admin": false, 00:23:29.407 "nvme_io": false, 00:23:29.407 "nvme_io_md": false, 00:23:29.407 "write_zeroes": true, 00:23:29.407 "zcopy": true, 00:23:29.407 "get_zone_info": false, 00:23:29.407 "zone_management": false, 00:23:29.407 "zone_append": false, 00:23:29.407 "compare": false, 00:23:29.407 "compare_and_write": false, 00:23:29.407 "abort": true, 00:23:29.407 "seek_hole": false, 00:23:29.407 "seek_data": false, 00:23:29.407 "copy": true, 00:23:29.407 "nvme_iov_md": false 00:23:29.407 }, 00:23:29.407 "memory_domains": [ 00:23:29.407 { 00:23:29.407 "dma_device_id": "system", 00:23:29.407 "dma_device_type": 1 00:23:29.407 }, 00:23:29.407 { 00:23:29.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.407 "dma_device_type": 2 00:23:29.407 } 00:23:29.407 ], 00:23:29.407 "driver_specific": {} 00:23:29.407 }' 00:23:29.407 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.407 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.407 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.407 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.407 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.665 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.665 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.665 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.665 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.666 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.666 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.666 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.666 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.666 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:29.666 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.923 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.923 "name": "BaseBdev4", 00:23:29.923 "aliases": [ 00:23:29.923 "b3302434-4ab5-417f-b01f-2099a133d89c" 00:23:29.923 ], 00:23:29.923 "product_name": "Malloc disk", 00:23:29.924 "block_size": 512, 00:23:29.924 "num_blocks": 65536, 00:23:29.924 "uuid": "b3302434-4ab5-417f-b01f-2099a133d89c", 00:23:29.924 "assigned_rate_limits": { 00:23:29.924 "rw_ios_per_sec": 0, 00:23:29.924 "rw_mbytes_per_sec": 0, 00:23:29.924 "r_mbytes_per_sec": 0, 00:23:29.924 
"w_mbytes_per_sec": 0 00:23:29.924 }, 00:23:29.924 "claimed": true, 00:23:29.924 "claim_type": "exclusive_write", 00:23:29.924 "zoned": false, 00:23:29.924 "supported_io_types": { 00:23:29.924 "read": true, 00:23:29.924 "write": true, 00:23:29.924 "unmap": true, 00:23:29.924 "flush": true, 00:23:29.924 "reset": true, 00:23:29.924 "nvme_admin": false, 00:23:29.924 "nvme_io": false, 00:23:29.924 "nvme_io_md": false, 00:23:29.924 "write_zeroes": true, 00:23:29.924 "zcopy": true, 00:23:29.924 "get_zone_info": false, 00:23:29.924 "zone_management": false, 00:23:29.924 "zone_append": false, 00:23:29.924 "compare": false, 00:23:29.924 "compare_and_write": false, 00:23:29.924 "abort": true, 00:23:29.924 "seek_hole": false, 00:23:29.924 "seek_data": false, 00:23:29.924 "copy": true, 00:23:29.924 "nvme_iov_md": false 00:23:29.924 }, 00:23:29.924 "memory_domains": [ 00:23:29.924 { 00:23:29.924 "dma_device_id": "system", 00:23:29.924 "dma_device_type": 1 00:23:29.924 }, 00:23:29.924 { 00:23:29.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.924 "dma_device_type": 2 00:23:29.924 } 00:23:29.924 ], 00:23:29.924 "driver_specific": {} 00:23:29.924 }' 00:23:29.924 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.924 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.182 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.182 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.182 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.182 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.182 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.182 10:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.182 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.182 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.182 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:30.440 [2024-07-26 10:33:43.304069] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.440 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.697 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.697 "name": "Existed_Raid", 00:23:30.697 "uuid": "b6909114-a951-484f-97c6-6aaafdacdcd2", 00:23:30.697 "strip_size_kb": 0, 00:23:30.697 "state": "online", 00:23:30.697 "raid_level": "raid1", 00:23:30.697 "superblock": true, 00:23:30.697 "num_base_bdevs": 4, 00:23:30.697 "num_base_bdevs_discovered": 3, 00:23:30.697 "num_base_bdevs_operational": 3, 00:23:30.697 "base_bdevs_list": [ 00:23:30.697 { 00:23:30.697 "name": null, 00:23:30.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.697 "is_configured": false, 00:23:30.697 "data_offset": 2048, 00:23:30.697 "data_size": 63488 00:23:30.697 }, 00:23:30.697 { 00:23:30.697 "name": "BaseBdev2", 00:23:30.697 "uuid": "bd52736a-0860-49f5-a20f-8ea4295d09af", 00:23:30.697 "is_configured": true, 00:23:30.697 "data_offset": 2048, 00:23:30.697 "data_size": 63488 00:23:30.697 }, 00:23:30.697 { 00:23:30.697 "name": "BaseBdev3", 00:23:30.697 "uuid": "c4a5d926-ccfa-4e7e-9f19-cd5286f6cfef", 00:23:30.697 "is_configured": true, 00:23:30.697 "data_offset": 2048, 00:23:30.697 "data_size": 63488 00:23:30.697 }, 00:23:30.697 { 00:23:30.697 "name": "BaseBdev4", 00:23:30.697 "uuid": "b3302434-4ab5-417f-b01f-2099a133d89c", 00:23:30.697 "is_configured": true, 00:23:30.697 "data_offset": 2048, 00:23:30.697 "data_size": 63488 00:23:30.697 } 00:23:30.697 ] 00:23:30.697 }' 00:23:30.697 10:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.697 10:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.261 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:31.261 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:31.261 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.261 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:31.519 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:31.519 
10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:31.519 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:31.777 [2024-07-26 10:33:44.576422] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:31.777 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:31.777 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:31.777 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.777 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:32.035 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:32.035 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:32.035 10:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:32.293 [2024-07-26 10:33:45.043915] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:32.293 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:32.293 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.293 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.293 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:32.551 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:32.551 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:32.551 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:32.809 [2024-07-26 10:33:45.507323] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:32.809 [2024-07-26 10:33:45.507396] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.809 [2024-07-26 10:33:45.517496] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.809 [2024-07-26 10:33:45.517523] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.809 [2024-07-26 10:33:45.517533] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f74f0 name Existed_Raid, state offline 00:23:32.809 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:32.809 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.809 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.809 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:33.067 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:33.067 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:33.067 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:33.067 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:33.067 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:33.067 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:33.325 BaseBdev2 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:33.325 10:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:33.325 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:33.583 [ 00:23:33.583 { 00:23:33.583 "name": "BaseBdev2", 00:23:33.583 "aliases": [ 00:23:33.583 "0b18c9fd-8356-4a23-8c31-e3125a433ce9" 00:23:33.583 ], 00:23:33.583 "product_name": "Malloc disk", 00:23:33.583 "block_size": 512, 00:23:33.583 "num_blocks": 65536, 00:23:33.583 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:33.583 "assigned_rate_limits": { 00:23:33.583 "rw_ios_per_sec": 0, 00:23:33.583 "rw_mbytes_per_sec": 0, 00:23:33.583 "r_mbytes_per_sec": 0, 00:23:33.584 "w_mbytes_per_sec": 0 00:23:33.584 }, 00:23:33.584 "claimed": false, 00:23:33.584 "zoned": false, 00:23:33.584 "supported_io_types": { 00:23:33.584 "read": true, 00:23:33.584 "write": true, 00:23:33.584 "unmap": true, 00:23:33.584 "flush": true, 00:23:33.584 "reset": true, 00:23:33.584 "nvme_admin": false, 00:23:33.584 "nvme_io": false, 00:23:33.584 "nvme_io_md": false, 00:23:33.584 "write_zeroes": true, 00:23:33.584 "zcopy": true, 00:23:33.584 "get_zone_info": false, 00:23:33.584 "zone_management": false, 00:23:33.584 "zone_append": false, 00:23:33.584 "compare": false, 00:23:33.584 "compare_and_write": false, 00:23:33.584 "abort": true, 00:23:33.584 "seek_hole": false, 00:23:33.584 "seek_data": false, 00:23:33.584 "copy": true, 00:23:33.584 "nvme_iov_md": false 00:23:33.584 }, 00:23:33.584 "memory_domains": [ 00:23:33.584 { 00:23:33.584 "dma_device_id": "system", 00:23:33.584 "dma_device_type": 1 00:23:33.584 }, 00:23:33.584 { 00:23:33.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:33.584 "dma_device_type": 2 00:23:33.584 } 00:23:33.584 ], 00:23:33.584 "driver_specific": {} 00:23:33.584 } 00:23:33.584 ] 00:23:33.584 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:33.584 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:33.584 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:33.584 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:33.842 BaseBdev3 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:33.842 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.101 10:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:34.359 [ 00:23:34.359 { 00:23:34.359 "name": "BaseBdev3", 00:23:34.359 "aliases": [ 00:23:34.359 "30f9927b-c035-491c-a53f-8076c3b4f047" 00:23:34.359 ], 00:23:34.359 "product_name": "Malloc disk", 00:23:34.359 "block_size": 512, 00:23:34.359 "num_blocks": 65536, 00:23:34.359 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:34.359 "assigned_rate_limits": { 00:23:34.359 "rw_ios_per_sec": 0, 00:23:34.359 "rw_mbytes_per_sec": 0, 00:23:34.359 "r_mbytes_per_sec": 0, 00:23:34.359 "w_mbytes_per_sec": 0 00:23:34.359 }, 00:23:34.359 "claimed": false, 00:23:34.359 "zoned": false, 00:23:34.359 "supported_io_types": { 00:23:34.359 "read": true, 00:23:34.359 "write": true, 00:23:34.359 "unmap": true, 00:23:34.359 "flush": true, 00:23:34.359 "reset": true, 00:23:34.359 "nvme_admin": false, 00:23:34.359 "nvme_io": false, 00:23:34.359 "nvme_io_md": false, 00:23:34.359 "write_zeroes": true, 00:23:34.359 "zcopy": true, 00:23:34.359 "get_zone_info": false, 00:23:34.359 "zone_management": false, 00:23:34.359 "zone_append": false, 00:23:34.359 "compare": false, 00:23:34.359 "compare_and_write": false, 00:23:34.359 "abort": true, 00:23:34.359 "seek_hole": false, 00:23:34.359 "seek_data": false, 00:23:34.359 "copy": true, 00:23:34.359 "nvme_iov_md": false 00:23:34.359 }, 00:23:34.359 "memory_domains": [ 00:23:34.359 { 00:23:34.359 "dma_device_id": "system", 00:23:34.359 "dma_device_type": 1 00:23:34.359 }, 00:23:34.359 { 00:23:34.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.359 "dma_device_type": 2 00:23:34.359 } 00:23:34.359 ], 00:23:34.359 "driver_specific": {} 00:23:34.359 } 00:23:34.359 ] 00:23:34.359 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:34.359 
10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:34.359 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:34.359 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:34.618 BaseBdev4 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:34.618 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.876 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:35.134 [ 00:23:35.134 { 00:23:35.134 "name": "BaseBdev4", 00:23:35.134 "aliases": [ 00:23:35.134 "7ea3698a-47c2-4134-9b33-b6b23feba1f3" 00:23:35.134 ], 00:23:35.134 "product_name": "Malloc disk", 00:23:35.134 "block_size": 512, 00:23:35.134 "num_blocks": 65536, 00:23:35.134 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:35.134 "assigned_rate_limits": { 00:23:35.134 "rw_ios_per_sec": 0, 00:23:35.134 "rw_mbytes_per_sec": 0, 00:23:35.134 "r_mbytes_per_sec": 0, 00:23:35.134 "w_mbytes_per_sec": 0 00:23:35.134 }, 00:23:35.134 "claimed": false, 00:23:35.134 "zoned": false, 00:23:35.134 "supported_io_types": { 00:23:35.134 "read": true, 00:23:35.134 "write": true, 00:23:35.134 "unmap": true, 00:23:35.134 "flush": true, 00:23:35.134 "reset": true, 00:23:35.135 "nvme_admin": false, 00:23:35.135 "nvme_io": false, 00:23:35.135 "nvme_io_md": false, 00:23:35.135 "write_zeroes": true, 00:23:35.135 "zcopy": true, 00:23:35.135 "get_zone_info": false, 00:23:35.135 "zone_management": false, 00:23:35.135 "zone_append": false, 00:23:35.135 "compare": false, 00:23:35.135 "compare_and_write": false, 00:23:35.135 "abort": true, 00:23:35.135 "seek_hole": false, 00:23:35.135 "seek_data": false, 00:23:35.135 "copy": true, 00:23:35.135 "nvme_iov_md": false 00:23:35.135 }, 00:23:35.135 "memory_domains": [ 00:23:35.135 { 00:23:35.135 "dma_device_id": "system", 00:23:35.135 "dma_device_type": 1 00:23:35.135 }, 00:23:35.135 { 00:23:35.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.135 "dma_device_type": 2 00:23:35.135 } 00:23:35.135 ], 00:23:35.135 "driver_specific": {} 00:23:35.135 } 00:23:35.135 ] 00:23:35.135 10:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:35.135 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:35.135 10:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:35.135 10:33:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:35.135 [2024-07-26 10:33:48.008672] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:35.135 [2024-07-26 10:33:48.008709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:35.135 [2024-07-26 10:33:48.008729] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.135 [2024-07-26 10:33:48.009931] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:35.135 [2024-07-26 10:33:48.009970] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.135 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.394 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.394 "name": "Existed_Raid", 00:23:35.394 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:35.394 "strip_size_kb": 0, 00:23:35.394 "state": "configuring", 00:23:35.394 "raid_level": "raid1", 00:23:35.394 "superblock": true, 00:23:35.394 "num_base_bdevs": 4, 00:23:35.394 "num_base_bdevs_discovered": 3, 00:23:35.394 "num_base_bdevs_operational": 4, 00:23:35.394 "base_bdevs_list": [ 00:23:35.394 { 00:23:35.394 "name": "BaseBdev1", 00:23:35.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.394 "is_configured": false, 00:23:35.394 "data_offset": 0, 00:23:35.394 "data_size": 0 00:23:35.394 }, 00:23:35.394 { 00:23:35.394 "name": "BaseBdev2", 00:23:35.394 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:35.394 "is_configured": true, 00:23:35.394 "data_offset": 2048, 00:23:35.394 "data_size": 63488 00:23:35.394 }, 00:23:35.394 { 00:23:35.394 "name": "BaseBdev3", 00:23:35.394 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:35.394 "is_configured": true, 00:23:35.394 "data_offset": 2048, 
00:23:35.394 "data_size": 63488 00:23:35.394 }, 00:23:35.394 { 00:23:35.394 "name": "BaseBdev4", 00:23:35.394 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:35.394 "is_configured": true, 00:23:35.394 "data_offset": 2048, 00:23:35.394 "data_size": 63488 00:23:35.394 } 00:23:35.394 ] 00:23:35.394 }' 00:23:35.394 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.394 10:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.961 10:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:36.219 [2024-07-26 10:33:49.047384] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.219 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.477 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.477 "name": "Existed_Raid", 00:23:36.477 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:36.477 "strip_size_kb": 0, 00:23:36.477 "state": "configuring", 00:23:36.477 "raid_level": "raid1", 00:23:36.477 "superblock": true, 00:23:36.477 "num_base_bdevs": 4, 00:23:36.477 "num_base_bdevs_discovered": 2, 00:23:36.477 "num_base_bdevs_operational": 4, 00:23:36.477 "base_bdevs_list": [ 00:23:36.477 { 00:23:36.477 "name": "BaseBdev1", 00:23:36.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.477 "is_configured": false, 00:23:36.477 "data_offset": 0, 00:23:36.477 "data_size": 0 00:23:36.477 }, 00:23:36.477 { 00:23:36.477 "name": null, 00:23:36.477 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:36.477 "is_configured": false, 00:23:36.477 "data_offset": 2048, 00:23:36.477 "data_size": 63488 00:23:36.477 }, 00:23:36.477 { 00:23:36.477 "name": "BaseBdev3", 00:23:36.477 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:36.477 "is_configured": true, 00:23:36.477 "data_offset": 2048, 00:23:36.477 "data_size": 63488 00:23:36.477 }, 00:23:36.477 
{ 00:23:36.477 "name": "BaseBdev4", 00:23:36.477 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:36.477 "is_configured": true, 00:23:36.477 "data_offset": 2048, 00:23:36.477 "data_size": 63488 00:23:36.477 } 00:23:36.477 ] 00:23:36.477 }' 00:23:36.477 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.477 10:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.044 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.044 10:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:37.302 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:37.302 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:37.560 [2024-07-26 10:33:50.313760] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:37.560 BaseBdev1 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:37.560 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:37.821 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:38.078 [ 00:23:38.078 { 00:23:38.078 "name": "BaseBdev1", 00:23:38.078 "aliases": [ 00:23:38.078 "17f629fd-9fe5-4166-88b7-6345d3eff87c" 00:23:38.078 ], 00:23:38.078 "product_name": "Malloc disk", 00:23:38.078 "block_size": 512, 00:23:38.078 "num_blocks": 65536, 00:23:38.078 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:38.078 "assigned_rate_limits": { 00:23:38.078 "rw_ios_per_sec": 0, 00:23:38.078 "rw_mbytes_per_sec": 0, 00:23:38.078 "r_mbytes_per_sec": 0, 00:23:38.078 "w_mbytes_per_sec": 0 00:23:38.078 }, 00:23:38.078 "claimed": true, 00:23:38.078 "claim_type": "exclusive_write", 00:23:38.078 "zoned": false, 00:23:38.078 "supported_io_types": { 00:23:38.078 "read": true, 00:23:38.078 "write": true, 00:23:38.078 "unmap": true, 00:23:38.078 "flush": true, 00:23:38.078 "reset": true, 00:23:38.078 "nvme_admin": false, 00:23:38.078 "nvme_io": false, 00:23:38.078 "nvme_io_md": false, 00:23:38.078 "write_zeroes": true, 00:23:38.078 "zcopy": true, 00:23:38.078 "get_zone_info": false, 00:23:38.078 "zone_management": false, 00:23:38.078 "zone_append": false, 00:23:38.078 "compare": false, 00:23:38.078 "compare_and_write": false, 
00:23:38.078 "abort": true, 00:23:38.078 "seek_hole": false, 00:23:38.078 "seek_data": false, 00:23:38.078 "copy": true, 00:23:38.078 "nvme_iov_md": false 00:23:38.078 }, 00:23:38.078 "memory_domains": [ 00:23:38.079 { 00:23:38.079 "dma_device_id": "system", 00:23:38.079 "dma_device_type": 1 00:23:38.079 }, 00:23:38.079 { 00:23:38.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.079 "dma_device_type": 2 00:23:38.079 } 00:23:38.079 ], 00:23:38.079 "driver_specific": {} 00:23:38.079 } 00:23:38.079 ] 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.079 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.335 10:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.335 "name": "Existed_Raid", 00:23:38.335 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:38.335 "strip_size_kb": 0, 00:23:38.335 "state": "configuring", 00:23:38.335 "raid_level": "raid1", 00:23:38.335 "superblock": true, 00:23:38.335 "num_base_bdevs": 4, 00:23:38.335 "num_base_bdevs_discovered": 3, 00:23:38.335 "num_base_bdevs_operational": 4, 00:23:38.335 "base_bdevs_list": [ 00:23:38.335 { 00:23:38.335 "name": "BaseBdev1", 00:23:38.335 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:38.335 "is_configured": true, 00:23:38.335 "data_offset": 2048, 00:23:38.335 "data_size": 63488 00:23:38.335 }, 00:23:38.335 { 00:23:38.335 "name": null, 00:23:38.335 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:38.335 "is_configured": false, 00:23:38.335 "data_offset": 2048, 00:23:38.335 "data_size": 63488 00:23:38.335 }, 00:23:38.335 { 00:23:38.335 "name": "BaseBdev3", 00:23:38.335 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:38.335 "is_configured": true, 00:23:38.335 "data_offset": 2048, 00:23:38.335 "data_size": 63488 00:23:38.335 }, 00:23:38.335 { 00:23:38.335 "name": "BaseBdev4", 00:23:38.335 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:38.335 "is_configured": true, 00:23:38.335 "data_offset": 2048, 00:23:38.335 "data_size": 63488 00:23:38.335 } 
00:23:38.335 ] 00:23:38.335 }' 00:23:38.335 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.335 10:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.934 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.934 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:38.934 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:38.934 10:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:39.191 [2024-07-26 10:33:52.018282] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.191 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.449 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.449 "name": "Existed_Raid", 00:23:39.449 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:39.449 "strip_size_kb": 0, 00:23:39.449 "state": "configuring", 00:23:39.449 "raid_level": "raid1", 00:23:39.449 "superblock": true, 00:23:39.449 "num_base_bdevs": 4, 00:23:39.449 "num_base_bdevs_discovered": 2, 00:23:39.449 "num_base_bdevs_operational": 4, 00:23:39.449 "base_bdevs_list": [ 00:23:39.449 { 00:23:39.449 "name": "BaseBdev1", 00:23:39.449 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:39.449 "is_configured": true, 00:23:39.450 "data_offset": 2048, 00:23:39.450 "data_size": 63488 00:23:39.450 }, 00:23:39.450 { 00:23:39.450 "name": null, 00:23:39.450 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:39.450 "is_configured": false, 00:23:39.450 "data_offset": 2048, 00:23:39.450 "data_size": 63488 00:23:39.450 }, 00:23:39.450 { 00:23:39.450 "name": null, 00:23:39.450 
"uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:39.450 "is_configured": false, 00:23:39.450 "data_offset": 2048, 00:23:39.450 "data_size": 63488 00:23:39.450 }, 00:23:39.450 { 00:23:39.450 "name": "BaseBdev4", 00:23:39.450 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:39.450 "is_configured": true, 00:23:39.450 "data_offset": 2048, 00:23:39.450 "data_size": 63488 00:23:39.450 } 00:23:39.450 ] 00:23:39.450 }' 00:23:39.450 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.450 10:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.015 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.015 10:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:40.273 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:40.273 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:40.531 [2024-07-26 10:33:53.285769] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.531 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.787 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.787 "name": "Existed_Raid", 00:23:40.787 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:40.787 "strip_size_kb": 0, 00:23:40.787 "state": "configuring", 00:23:40.787 "raid_level": "raid1", 00:23:40.787 "superblock": true, 00:23:40.787 "num_base_bdevs": 4, 00:23:40.787 "num_base_bdevs_discovered": 3, 00:23:40.787 "num_base_bdevs_operational": 4, 00:23:40.787 "base_bdevs_list": [ 00:23:40.787 { 00:23:40.787 "name": "BaseBdev1", 00:23:40.787 "uuid": 
"17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:40.787 "is_configured": true, 00:23:40.787 "data_offset": 2048, 00:23:40.787 "data_size": 63488 00:23:40.787 }, 00:23:40.787 { 00:23:40.787 "name": null, 00:23:40.787 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:40.787 "is_configured": false, 00:23:40.787 "data_offset": 2048, 00:23:40.787 "data_size": 63488 00:23:40.787 }, 00:23:40.787 { 00:23:40.787 "name": "BaseBdev3", 00:23:40.787 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:40.787 "is_configured": true, 00:23:40.787 "data_offset": 2048, 00:23:40.787 "data_size": 63488 00:23:40.787 }, 00:23:40.787 { 00:23:40.787 "name": "BaseBdev4", 00:23:40.787 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:40.787 "is_configured": true, 00:23:40.787 "data_offset": 2048, 00:23:40.787 "data_size": 63488 00:23:40.787 } 00:23:40.787 ] 00:23:40.787 }' 00:23:40.787 10:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.788 10:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.352 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.352 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:41.609 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:41.609 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:41.867 [2024-07-26 10:33:54.569162] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.867 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.124 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.124 "name": "Existed_Raid", 00:23:42.124 "uuid": 
"289dad93-590a-4f95-b369-12066ae80bac", 00:23:42.124 "strip_size_kb": 0, 00:23:42.125 "state": "configuring", 00:23:42.125 "raid_level": "raid1", 00:23:42.125 "superblock": true, 00:23:42.125 "num_base_bdevs": 4, 00:23:42.125 "num_base_bdevs_discovered": 2, 00:23:42.125 "num_base_bdevs_operational": 4, 00:23:42.125 "base_bdevs_list": [ 00:23:42.125 { 00:23:42.125 "name": null, 00:23:42.125 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:42.125 "is_configured": false, 00:23:42.125 "data_offset": 2048, 00:23:42.125 "data_size": 63488 00:23:42.125 }, 00:23:42.125 { 00:23:42.125 "name": null, 00:23:42.125 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:42.125 "is_configured": false, 00:23:42.125 "data_offset": 2048, 00:23:42.125 "data_size": 63488 00:23:42.125 }, 00:23:42.125 { 00:23:42.125 "name": "BaseBdev3", 00:23:42.125 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:42.125 "is_configured": true, 00:23:42.125 "data_offset": 2048, 00:23:42.125 "data_size": 63488 00:23:42.125 }, 00:23:42.125 { 00:23:42.125 "name": "BaseBdev4", 00:23:42.125 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:42.125 "is_configured": true, 00:23:42.125 "data_offset": 2048, 00:23:42.125 "data_size": 63488 00:23:42.125 } 00:23:42.125 ] 00:23:42.125 }' 00:23:42.125 10:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.125 10:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.690 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.690 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:42.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:42.947 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:42.947 [2024-07-26 10:33:55.842418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.205 10:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.205 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.205 "name": "Existed_Raid", 00:23:43.205 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:43.205 "strip_size_kb": 0, 00:23:43.205 "state": "configuring", 00:23:43.205 "raid_level": "raid1", 00:23:43.205 "superblock": true, 00:23:43.205 "num_base_bdevs": 4, 00:23:43.205 "num_base_bdevs_discovered": 3, 00:23:43.205 "num_base_bdevs_operational": 4, 00:23:43.205 "base_bdevs_list": [ 00:23:43.205 { 00:23:43.205 "name": null, 00:23:43.205 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:43.205 "is_configured": false, 00:23:43.205 "data_offset": 2048, 00:23:43.205 "data_size": 63488 00:23:43.205 }, 00:23:43.205 { 00:23:43.205 "name": "BaseBdev2", 00:23:43.205 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:43.205 "is_configured": true, 00:23:43.205 "data_offset": 2048, 00:23:43.205 "data_size": 63488 00:23:43.205 }, 00:23:43.205 { 00:23:43.205 "name": "BaseBdev3", 00:23:43.205 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:43.205 "is_configured": true, 00:23:43.205 "data_offset": 2048, 00:23:43.205 "data_size": 63488 00:23:43.205 }, 00:23:43.205 { 00:23:43.205 "name": "BaseBdev4", 00:23:43.205 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:43.205 "is_configured": true, 00:23:43.205 "data_offset": 2048, 00:23:43.205 "data_size": 63488 00:23:43.205 } 00:23:43.205 ] 00:23:43.205 }' 00:23:43.205 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.205 10:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.772 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.772 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:44.030 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:44.030 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.030 10:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:44.288 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 17f629fd-9fe5-4166-88b7-6345d3eff87c 00:23:44.547 [2024-07-26 10:33:57.345474] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:44.547 [2024-07-26 10:33:57.345617] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f4a640 00:23:44.547 [2024-07-26 10:33:57.345629] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:44.547 [2024-07-26 10:33:57.345791] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f47d00 00:23:44.547 [2024-07-26 10:33:57.345906] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x1f4a640 00:23:44.547 [2024-07-26 10:33:57.345915] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f4a640 00:23:44.547 [2024-07-26 10:33:57.346004] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.547 NewBaseBdev 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:44.547 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.804 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:45.063 [ 00:23:45.063 { 00:23:45.063 "name": "NewBaseBdev", 00:23:45.063 "aliases": [ 00:23:45.063 "17f629fd-9fe5-4166-88b7-6345d3eff87c" 00:23:45.063 ], 00:23:45.063 "product_name": "Malloc disk", 00:23:45.063 "block_size": 512, 00:23:45.063 "num_blocks": 65536, 00:23:45.063 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:45.063 "assigned_rate_limits": { 00:23:45.063 "rw_ios_per_sec": 0, 00:23:45.063 "rw_mbytes_per_sec": 0, 00:23:45.063 "r_mbytes_per_sec": 0, 00:23:45.063 "w_mbytes_per_sec": 0 00:23:45.063 }, 00:23:45.063 "claimed": true, 00:23:45.063 "claim_type": "exclusive_write", 00:23:45.063 "zoned": false, 00:23:45.063 "supported_io_types": { 00:23:45.063 "read": true, 00:23:45.063 "write": true, 00:23:45.063 "unmap": true, 00:23:45.063 "flush": true, 00:23:45.063 "reset": true, 00:23:45.063 "nvme_admin": false, 00:23:45.063 "nvme_io": false, 00:23:45.063 "nvme_io_md": false, 00:23:45.063 "write_zeroes": true, 00:23:45.063 "zcopy": true, 00:23:45.063 "get_zone_info": false, 00:23:45.063 "zone_management": false, 00:23:45.063 "zone_append": false, 00:23:45.063 "compare": false, 00:23:45.063 "compare_and_write": false, 00:23:45.063 "abort": true, 00:23:45.063 "seek_hole": false, 00:23:45.063 "seek_data": false, 00:23:45.063 "copy": true, 00:23:45.063 "nvme_iov_md": false 00:23:45.063 }, 00:23:45.063 "memory_domains": [ 00:23:45.063 { 00:23:45.063 "dma_device_id": "system", 00:23:45.063 "dma_device_type": 1 00:23:45.063 }, 00:23:45.063 { 00:23:45.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.063 "dma_device_type": 2 00:23:45.063 } 00:23:45.063 ], 00:23:45.063 "driver_specific": {} 00:23:45.063 } 00:23:45.063 ] 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.063 10:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.321 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.321 "name": "Existed_Raid", 00:23:45.321 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:45.321 "strip_size_kb": 0, 00:23:45.321 "state": "online", 00:23:45.321 "raid_level": "raid1", 00:23:45.321 "superblock": true, 00:23:45.321 "num_base_bdevs": 4, 00:23:45.321 "num_base_bdevs_discovered": 4, 00:23:45.321 "num_base_bdevs_operational": 4, 00:23:45.321 "base_bdevs_list": [ 00:23:45.321 { 00:23:45.321 "name": "NewBaseBdev", 00:23:45.321 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:45.321 "is_configured": true, 00:23:45.321 "data_offset": 2048, 00:23:45.321 "data_size": 63488 00:23:45.321 }, 00:23:45.321 { 00:23:45.321 "name": "BaseBdev2", 00:23:45.321 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:45.321 "is_configured": true, 00:23:45.321 "data_offset": 2048, 00:23:45.321 "data_size": 63488 00:23:45.321 }, 00:23:45.321 { 00:23:45.321 "name": "BaseBdev3", 00:23:45.321 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:45.321 "is_configured": true, 00:23:45.321 "data_offset": 2048, 00:23:45.321 "data_size": 63488 00:23:45.321 }, 00:23:45.321 { 00:23:45.321 "name": "BaseBdev4", 00:23:45.321 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:45.321 "is_configured": true, 00:23:45.321 "data_offset": 2048, 00:23:45.321 "data_size": 63488 00:23:45.321 } 00:23:45.321 ] 00:23:45.321 }' 00:23:45.321 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.321 10:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.886 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:45.886 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:45.887 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:45.887 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:45.887 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:45.887 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:45.887 10:33:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:45.887 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:46.145 [2024-07-26 10:33:58.829702] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.145 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:46.145 "name": "Existed_Raid", 00:23:46.145 "aliases": [ 00:23:46.145 "289dad93-590a-4f95-b369-12066ae80bac" 00:23:46.145 ], 00:23:46.145 "product_name": "Raid Volume", 00:23:46.145 "block_size": 512, 00:23:46.145 "num_blocks": 63488, 00:23:46.145 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:46.145 "assigned_rate_limits": { 00:23:46.145 "rw_ios_per_sec": 0, 00:23:46.145 "rw_mbytes_per_sec": 0, 00:23:46.145 "r_mbytes_per_sec": 0, 00:23:46.145 "w_mbytes_per_sec": 0 00:23:46.145 }, 00:23:46.145 "claimed": false, 00:23:46.145 "zoned": false, 00:23:46.145 "supported_io_types": { 00:23:46.145 "read": true, 00:23:46.145 "write": true, 00:23:46.145 "unmap": false, 00:23:46.145 "flush": false, 00:23:46.145 "reset": true, 00:23:46.145 "nvme_admin": false, 00:23:46.145 "nvme_io": false, 00:23:46.145 "nvme_io_md": false, 00:23:46.145 "write_zeroes": true, 00:23:46.145 "zcopy": false, 00:23:46.145 "get_zone_info": false, 00:23:46.145 "zone_management": false, 00:23:46.145 "zone_append": false, 00:23:46.145 "compare": false, 00:23:46.145 "compare_and_write": false, 00:23:46.145 "abort": false, 00:23:46.145 "seek_hole": false, 00:23:46.145 "seek_data": false, 00:23:46.145 "copy": false, 00:23:46.145 "nvme_iov_md": false 00:23:46.145 }, 00:23:46.145 "memory_domains": [ 00:23:46.145 { 00:23:46.145 "dma_device_id": "system", 00:23:46.145 "dma_device_type": 1 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.145 "dma_device_type": 2 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "system", 00:23:46.145 "dma_device_type": 1 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.145 "dma_device_type": 2 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "system", 00:23:46.145 "dma_device_type": 1 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.145 "dma_device_type": 2 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "system", 00:23:46.145 "dma_device_type": 1 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.145 "dma_device_type": 2 00:23:46.145 } 00:23:46.145 ], 00:23:46.145 "driver_specific": { 00:23:46.145 "raid": { 00:23:46.145 "uuid": "289dad93-590a-4f95-b369-12066ae80bac", 00:23:46.145 "strip_size_kb": 0, 00:23:46.145 "state": "online", 00:23:46.145 "raid_level": "raid1", 00:23:46.145 "superblock": true, 00:23:46.145 "num_base_bdevs": 4, 00:23:46.145 "num_base_bdevs_discovered": 4, 00:23:46.145 "num_base_bdevs_operational": 4, 00:23:46.145 "base_bdevs_list": [ 00:23:46.145 { 00:23:46.145 "name": "NewBaseBdev", 00:23:46.145 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:46.145 "is_configured": true, 00:23:46.145 "data_offset": 2048, 00:23:46.145 "data_size": 63488 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "name": "BaseBdev2", 00:23:46.145 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:46.145 "is_configured": true, 00:23:46.145 "data_offset": 2048, 00:23:46.145 
"data_size": 63488 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "name": "BaseBdev3", 00:23:46.145 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:46.145 "is_configured": true, 00:23:46.145 "data_offset": 2048, 00:23:46.145 "data_size": 63488 00:23:46.145 }, 00:23:46.145 { 00:23:46.145 "name": "BaseBdev4", 00:23:46.145 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:46.145 "is_configured": true, 00:23:46.145 "data_offset": 2048, 00:23:46.145 "data_size": 63488 00:23:46.145 } 00:23:46.145 ] 00:23:46.145 } 00:23:46.145 } 00:23:46.145 }' 00:23:46.145 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:46.145 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:46.145 BaseBdev2 00:23:46.145 BaseBdev3 00:23:46.145 BaseBdev4' 00:23:46.145 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.145 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:46.145 10:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.404 "name": "NewBaseBdev", 00:23:46.404 "aliases": [ 00:23:46.404 "17f629fd-9fe5-4166-88b7-6345d3eff87c" 00:23:46.404 ], 00:23:46.404 "product_name": "Malloc disk", 00:23:46.404 "block_size": 512, 00:23:46.404 "num_blocks": 65536, 00:23:46.404 "uuid": "17f629fd-9fe5-4166-88b7-6345d3eff87c", 00:23:46.404 "assigned_rate_limits": { 00:23:46.404 "rw_ios_per_sec": 0, 00:23:46.404 "rw_mbytes_per_sec": 0, 00:23:46.404 "r_mbytes_per_sec": 0, 00:23:46.404 "w_mbytes_per_sec": 0 00:23:46.404 }, 00:23:46.404 "claimed": true, 00:23:46.404 "claim_type": "exclusive_write", 00:23:46.404 "zoned": false, 00:23:46.404 "supported_io_types": { 00:23:46.404 "read": true, 00:23:46.404 "write": true, 00:23:46.404 "unmap": true, 00:23:46.404 "flush": true, 00:23:46.404 "reset": true, 00:23:46.404 "nvme_admin": false, 00:23:46.404 "nvme_io": false, 00:23:46.404 "nvme_io_md": false, 00:23:46.404 "write_zeroes": true, 00:23:46.404 "zcopy": true, 00:23:46.404 "get_zone_info": false, 00:23:46.404 "zone_management": false, 00:23:46.404 "zone_append": false, 00:23:46.404 "compare": false, 00:23:46.404 "compare_and_write": false, 00:23:46.404 "abort": true, 00:23:46.404 "seek_hole": false, 00:23:46.404 "seek_data": false, 00:23:46.404 "copy": true, 00:23:46.404 "nvme_iov_md": false 00:23:46.404 }, 00:23:46.404 "memory_domains": [ 00:23:46.404 { 00:23:46.404 "dma_device_id": "system", 00:23:46.404 "dma_device_type": 1 00:23:46.404 }, 00:23:46.404 { 00:23:46.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.404 "dma_device_type": 2 00:23:46.404 } 00:23:46.404 ], 00:23:46.404 "driver_specific": {} 00:23:46.404 }' 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.404 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:46.662 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.920 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.920 "name": "BaseBdev2", 00:23:46.920 "aliases": [ 00:23:46.920 "0b18c9fd-8356-4a23-8c31-e3125a433ce9" 00:23:46.920 ], 00:23:46.920 "product_name": "Malloc disk", 00:23:46.920 "block_size": 512, 00:23:46.920 "num_blocks": 65536, 00:23:46.920 "uuid": "0b18c9fd-8356-4a23-8c31-e3125a433ce9", 00:23:46.920 "assigned_rate_limits": { 00:23:46.920 "rw_ios_per_sec": 0, 00:23:46.920 "rw_mbytes_per_sec": 0, 00:23:46.920 "r_mbytes_per_sec": 0, 00:23:46.920 "w_mbytes_per_sec": 0 00:23:46.920 }, 00:23:46.920 "claimed": true, 00:23:46.920 "claim_type": "exclusive_write", 00:23:46.920 "zoned": false, 00:23:46.920 "supported_io_types": { 00:23:46.920 "read": true, 00:23:46.920 "write": true, 00:23:46.920 "unmap": true, 00:23:46.920 "flush": true, 00:23:46.920 "reset": true, 00:23:46.920 "nvme_admin": false, 00:23:46.920 "nvme_io": false, 00:23:46.920 "nvme_io_md": false, 00:23:46.920 "write_zeroes": true, 00:23:46.920 "zcopy": true, 00:23:46.920 "get_zone_info": false, 00:23:46.920 "zone_management": false, 00:23:46.920 "zone_append": false, 00:23:46.920 "compare": false, 00:23:46.920 "compare_and_write": false, 00:23:46.920 "abort": true, 00:23:46.920 "seek_hole": false, 00:23:46.920 "seek_data": false, 00:23:46.920 "copy": true, 00:23:46.920 "nvme_iov_md": false 00:23:46.920 }, 00:23:46.920 "memory_domains": [ 00:23:46.920 { 00:23:46.921 "dma_device_id": "system", 00:23:46.921 "dma_device_type": 1 00:23:46.921 }, 00:23:46.921 { 00:23:46.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.921 "dma_device_type": 2 00:23:46.921 } 00:23:46.921 ], 00:23:46.921 "driver_specific": {} 00:23:46.921 }' 00:23:46.921 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.921 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.921 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.921 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.921 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.178 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:23:47.178 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.178 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.178 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:47.178 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.178 10:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.178 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.178 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:47.178 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:47.178 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.437 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.437 "name": "BaseBdev3", 00:23:47.437 "aliases": [ 00:23:47.437 "30f9927b-c035-491c-a53f-8076c3b4f047" 00:23:47.437 ], 00:23:47.437 "product_name": "Malloc disk", 00:23:47.437 "block_size": 512, 00:23:47.437 "num_blocks": 65536, 00:23:47.437 "uuid": "30f9927b-c035-491c-a53f-8076c3b4f047", 00:23:47.437 "assigned_rate_limits": { 00:23:47.437 "rw_ios_per_sec": 0, 00:23:47.437 "rw_mbytes_per_sec": 0, 00:23:47.437 "r_mbytes_per_sec": 0, 00:23:47.437 "w_mbytes_per_sec": 0 00:23:47.437 }, 00:23:47.437 "claimed": true, 00:23:47.437 "claim_type": "exclusive_write", 00:23:47.437 "zoned": false, 00:23:47.437 "supported_io_types": { 00:23:47.437 "read": true, 00:23:47.437 "write": true, 00:23:47.437 "unmap": true, 00:23:47.437 "flush": true, 00:23:47.437 "reset": true, 00:23:47.437 "nvme_admin": false, 00:23:47.437 "nvme_io": false, 00:23:47.437 "nvme_io_md": false, 00:23:47.437 "write_zeroes": true, 00:23:47.437 "zcopy": true, 00:23:47.437 "get_zone_info": false, 00:23:47.437 "zone_management": false, 00:23:47.437 "zone_append": false, 00:23:47.437 "compare": false, 00:23:47.437 "compare_and_write": false, 00:23:47.437 "abort": true, 00:23:47.437 "seek_hole": false, 00:23:47.437 "seek_data": false, 00:23:47.437 "copy": true, 00:23:47.437 "nvme_iov_md": false 00:23:47.437 }, 00:23:47.437 "memory_domains": [ 00:23:47.437 { 00:23:47.437 "dma_device_id": "system", 00:23:47.437 "dma_device_type": 1 00:23:47.437 }, 00:23:47.437 { 00:23:47.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.437 "dma_device_type": 2 00:23:47.437 } 00:23:47.437 ], 00:23:47.437 "driver_specific": {} 00:23:47.437 }' 00:23:47.437 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.437 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:47.695 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.954 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.954 "name": "BaseBdev4", 00:23:47.954 "aliases": [ 00:23:47.954 "7ea3698a-47c2-4134-9b33-b6b23feba1f3" 00:23:47.954 ], 00:23:47.954 "product_name": "Malloc disk", 00:23:47.954 "block_size": 512, 00:23:47.954 "num_blocks": 65536, 00:23:47.954 "uuid": "7ea3698a-47c2-4134-9b33-b6b23feba1f3", 00:23:47.954 "assigned_rate_limits": { 00:23:47.954 "rw_ios_per_sec": 0, 00:23:47.954 "rw_mbytes_per_sec": 0, 00:23:47.954 "r_mbytes_per_sec": 0, 00:23:47.954 "w_mbytes_per_sec": 0 00:23:47.954 }, 00:23:47.954 "claimed": true, 00:23:47.954 "claim_type": "exclusive_write", 00:23:47.954 "zoned": false, 00:23:47.954 "supported_io_types": { 00:23:47.954 "read": true, 00:23:47.954 "write": true, 00:23:47.954 "unmap": true, 00:23:47.954 "flush": true, 00:23:47.954 "reset": true, 00:23:47.954 "nvme_admin": false, 00:23:47.954 "nvme_io": false, 00:23:47.954 "nvme_io_md": false, 00:23:47.954 "write_zeroes": true, 00:23:47.954 "zcopy": true, 00:23:47.954 "get_zone_info": false, 00:23:47.954 "zone_management": false, 00:23:47.954 "zone_append": false, 00:23:47.954 "compare": false, 00:23:47.954 "compare_and_write": false, 00:23:47.954 "abort": true, 00:23:47.954 "seek_hole": false, 00:23:47.954 "seek_data": false, 00:23:47.954 "copy": true, 00:23:47.954 "nvme_iov_md": false 00:23:47.954 }, 00:23:47.954 "memory_domains": [ 00:23:47.954 { 00:23:47.954 "dma_device_id": "system", 00:23:47.954 "dma_device_type": 1 00:23:47.954 }, 00:23:47.954 { 00:23:47.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.954 "dma_device_type": 2 00:23:47.954 } 00:23:47.954 ], 00:23:47.954 "driver_specific": {} 00:23:47.954 }' 00:23:47.954 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:48.212 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:48.212 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:48.212 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:48.212 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:48.212 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:48.212 10:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.212 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.212 10:34:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:48.212 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:48.471 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:48.471 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:48.471 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:48.471 [2024-07-26 10:34:01.372100] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:48.471 [2024-07-26 10:34:01.372123] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:48.471 [2024-07-26 10:34:01.372173] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.471 [2024-07-26 10:34:01.372409] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.471 [2024-07-26 10:34:01.372420] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f4a640 name Existed_Raid, state offline 00:23:48.729 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3459879 00:23:48.729 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3459879 ']' 00:23:48.729 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3459879 00:23:48.729 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:23:48.729 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.730 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3459879 00:23:48.730 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.730 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:48.730 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3459879' 00:23:48.730 killing process with pid 3459879 00:23:48.730 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3459879 00:23:48.730 [2024-07-26 10:34:01.445743] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.730 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3459879 00:23:48.730 [2024-07-26 10:34:01.475949] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:48.988 10:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:48.988 00:23:48.988 real 0m30.500s 00:23:48.988 user 0m55.937s 00:23:48.988 sys 0m5.649s 00:23:48.988 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:48.988 10:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.988 ************************************ 00:23:48.988 END TEST raid_state_function_test_sb 00:23:48.988 ************************************ 00:23:48.988 10:34:01 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:23:48.988 10:34:01 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:48.988 10:34:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:48.988 10:34:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:48.988 ************************************ 00:23:48.988 START TEST raid_superblock_test 00:23:48.988 ************************************ 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3465568 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3465568 /var/tmp/spdk-raid.sock 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3465568 ']' 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:48.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.988 10:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.988 [2024-07-26 10:34:01.784088] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
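At this point raid_superblock_test has spawned its own SPDK target: a bdev_svc app is started with a private RPC socket and the bdev_raid log flag (which produces the *DEBUG* traces below), its pid is recorded as raid_pid, and the test waits on the socket before issuing any RPCs; the killprocess/wait pair at the end of the previous test is the matching teardown. A rough equivalent outside the harness, where the socket-polling loop is only a stand-in for the harness's waitforlisten helper:

  #!/usr/bin/env bash
  svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
  sock=/var/tmp/spdk-raid.sock

  # Start the target with RAID debug traces enabled and remember its pid.
  $svc -r $sock -L bdev_raid &
  raid_pid=$!

  # Wait until the RPC socket is up (stand-in for the waitforlisten helper).
  until [ -S $sock ]; do sleep 0.1; done

  # ... bdev_malloc_create / bdev_passthru_create / bdev_raid_create RPCs go here ...

  # Teardown, mirroring the killprocess/wait sequence in the trace.
  kill $raid_pid && wait $raid_pid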
00:23:48.988 [2024-07-26 10:34:01.784155] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465568 ] 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:48.988 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:48.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:48.988 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:49.246 [2024-07-26 10:34:01.919292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.246 [2024-07-26 10:34:01.963822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.246 [2024-07-26 10:34:02.024960] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.246 [2024-07-26 10:34:02.024998] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:49.813 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:50.069 malloc1 00:23:50.069 10:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:50.345 [2024-07-26 10:34:03.124435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:50.345 [2024-07-26 10:34:03.124480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.345 [2024-07-26 10:34:03.124499] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151e270 00:23:50.345 [2024-07-26 10:34:03.124511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.345 [2024-07-26 10:34:03.125926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.345 [2024-07-26 10:34:03.125952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:50.345 pt1 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.345 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:50.617 malloc2 00:23:50.617 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:50.875 [2024-07-26 10:34:03.585914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:50.875 [2024-07-26 10:34:03.585952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.875 [2024-07-26 10:34:03.585968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14da2f0 00:23:50.875 [2024-07-26 10:34:03.585980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.875 [2024-07-26 10:34:03.587418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.875 [2024-07-26 10:34:03.587444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:50.875 pt2 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:50.875 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:51.133 malloc3 00:23:51.133 10:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:51.133 [2024-07-26 10:34:04.035376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:51.133 [2024-07-26 10:34:04.035417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.133 [2024-07-26 10:34:04.035433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a4650 00:23:51.133 [2024-07-26 10:34:04.035445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.392 [2024-07-26 10:34:04.036928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.392 [2024-07-26 10:34:04.036956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:51.392 pt3 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:51.392 malloc4 00:23:51.392 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:51.649 [2024-07-26 10:34:04.484864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:51.649 [2024-07-26 10:34:04.484904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.649 [2024-07-26 10:34:04.484920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a5ce0 00:23:51.649 [2024-07-26 10:34:04.484932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.649 [2024-07-26 10:34:04.486227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.650 [2024-07-26 10:34:04.486253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:51.650 pt4 00:23:51.650 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:51.650 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:51.650 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:51.907 [2024-07-26 10:34:04.709481] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:51.907 [2024-07-26 10:34:04.710644] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.907 [2024-07-26 10:34:04.710695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:51.907 [2024-07-26 10:34:04.710738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:51.907 [2024-07-26 10:34:04.710880] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x14a86e0 00:23:51.907 [2024-07-26 10:34:04.710890] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:51.907 [2024-07-26 10:34:04.711076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1383320 00:23:51.907 [2024-07-26 10:34:04.711215] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14a86e0 00:23:51.907 [2024-07-26 10:34:04.711225] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14a86e0 00:23:51.907 [2024-07-26 10:34:04.711331] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.907 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.165 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.165 "name": "raid_bdev1", 00:23:52.165 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:23:52.165 "strip_size_kb": 0, 00:23:52.165 "state": "online", 00:23:52.165 "raid_level": "raid1", 00:23:52.165 "superblock": true, 00:23:52.165 "num_base_bdevs": 4, 00:23:52.165 "num_base_bdevs_discovered": 4, 00:23:52.165 "num_base_bdevs_operational": 4, 00:23:52.165 "base_bdevs_list": [ 00:23:52.165 { 00:23:52.165 "name": "pt1", 00:23:52.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:52.165 "is_configured": true, 00:23:52.165 "data_offset": 2048, 00:23:52.165 "data_size": 63488 00:23:52.165 }, 00:23:52.165 { 00:23:52.165 
"name": "pt2", 00:23:52.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.165 "is_configured": true, 00:23:52.165 "data_offset": 2048, 00:23:52.165 "data_size": 63488 00:23:52.165 }, 00:23:52.165 { 00:23:52.165 "name": "pt3", 00:23:52.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:52.165 "is_configured": true, 00:23:52.165 "data_offset": 2048, 00:23:52.165 "data_size": 63488 00:23:52.165 }, 00:23:52.165 { 00:23:52.165 "name": "pt4", 00:23:52.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:52.165 "is_configured": true, 00:23:52.165 "data_offset": 2048, 00:23:52.165 "data_size": 63488 00:23:52.165 } 00:23:52.165 ] 00:23:52.165 }' 00:23:52.165 10:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.165 10:34:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:52.730 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:52.988 [2024-07-26 10:34:05.676293] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:52.988 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:52.988 "name": "raid_bdev1", 00:23:52.988 "aliases": [ 00:23:52.988 "a86a9171-4d78-48c1-ad97-96d4d8bc4be5" 00:23:52.988 ], 00:23:52.988 "product_name": "Raid Volume", 00:23:52.988 "block_size": 512, 00:23:52.988 "num_blocks": 63488, 00:23:52.988 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:23:52.988 "assigned_rate_limits": { 00:23:52.988 "rw_ios_per_sec": 0, 00:23:52.988 "rw_mbytes_per_sec": 0, 00:23:52.988 "r_mbytes_per_sec": 0, 00:23:52.988 "w_mbytes_per_sec": 0 00:23:52.988 }, 00:23:52.988 "claimed": false, 00:23:52.988 "zoned": false, 00:23:52.988 "supported_io_types": { 00:23:52.988 "read": true, 00:23:52.988 "write": true, 00:23:52.988 "unmap": false, 00:23:52.988 "flush": false, 00:23:52.988 "reset": true, 00:23:52.988 "nvme_admin": false, 00:23:52.988 "nvme_io": false, 00:23:52.988 "nvme_io_md": false, 00:23:52.988 "write_zeroes": true, 00:23:52.988 "zcopy": false, 00:23:52.988 "get_zone_info": false, 00:23:52.988 "zone_management": false, 00:23:52.988 "zone_append": false, 00:23:52.988 "compare": false, 00:23:52.988 "compare_and_write": false, 00:23:52.988 "abort": false, 00:23:52.988 "seek_hole": false, 00:23:52.988 "seek_data": false, 00:23:52.988 "copy": false, 00:23:52.988 "nvme_iov_md": false 00:23:52.988 }, 00:23:52.988 "memory_domains": [ 00:23:52.988 { 00:23:52.988 "dma_device_id": "system", 00:23:52.988 "dma_device_type": 1 00:23:52.988 }, 00:23:52.988 { 00:23:52.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.988 "dma_device_type": 2 00:23:52.988 }, 00:23:52.988 { 
00:23:52.988 "dma_device_id": "system", 00:23:52.988 "dma_device_type": 1 00:23:52.988 }, 00:23:52.988 { 00:23:52.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.988 "dma_device_type": 2 00:23:52.988 }, 00:23:52.988 { 00:23:52.988 "dma_device_id": "system", 00:23:52.989 "dma_device_type": 1 00:23:52.989 }, 00:23:52.989 { 00:23:52.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.989 "dma_device_type": 2 00:23:52.989 }, 00:23:52.989 { 00:23:52.989 "dma_device_id": "system", 00:23:52.989 "dma_device_type": 1 00:23:52.989 }, 00:23:52.989 { 00:23:52.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.989 "dma_device_type": 2 00:23:52.989 } 00:23:52.989 ], 00:23:52.989 "driver_specific": { 00:23:52.989 "raid": { 00:23:52.989 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:23:52.989 "strip_size_kb": 0, 00:23:52.989 "state": "online", 00:23:52.989 "raid_level": "raid1", 00:23:52.989 "superblock": true, 00:23:52.989 "num_base_bdevs": 4, 00:23:52.989 "num_base_bdevs_discovered": 4, 00:23:52.989 "num_base_bdevs_operational": 4, 00:23:52.989 "base_bdevs_list": [ 00:23:52.989 { 00:23:52.989 "name": "pt1", 00:23:52.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:52.989 "is_configured": true, 00:23:52.989 "data_offset": 2048, 00:23:52.989 "data_size": 63488 00:23:52.989 }, 00:23:52.989 { 00:23:52.989 "name": "pt2", 00:23:52.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.989 "is_configured": true, 00:23:52.989 "data_offset": 2048, 00:23:52.989 "data_size": 63488 00:23:52.989 }, 00:23:52.989 { 00:23:52.989 "name": "pt3", 00:23:52.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:52.989 "is_configured": true, 00:23:52.989 "data_offset": 2048, 00:23:52.989 "data_size": 63488 00:23:52.989 }, 00:23:52.989 { 00:23:52.989 "name": "pt4", 00:23:52.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:52.989 "is_configured": true, 00:23:52.989 "data_offset": 2048, 00:23:52.989 "data_size": 63488 00:23:52.989 } 00:23:52.989 ] 00:23:52.989 } 00:23:52.989 } 00:23:52.989 }' 00:23:52.989 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:52.989 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:52.989 pt2 00:23:52.989 pt3 00:23:52.989 pt4' 00:23:52.989 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:52.989 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:52.989 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.247 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.247 "name": "pt1", 00:23:53.247 "aliases": [ 00:23:53.247 "00000000-0000-0000-0000-000000000001" 00:23:53.247 ], 00:23:53.247 "product_name": "passthru", 00:23:53.247 "block_size": 512, 00:23:53.247 "num_blocks": 65536, 00:23:53.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:53.247 "assigned_rate_limits": { 00:23:53.247 "rw_ios_per_sec": 0, 00:23:53.247 "rw_mbytes_per_sec": 0, 00:23:53.247 "r_mbytes_per_sec": 0, 00:23:53.247 "w_mbytes_per_sec": 0 00:23:53.247 }, 00:23:53.247 "claimed": true, 00:23:53.247 "claim_type": "exclusive_write", 00:23:53.247 "zoned": false, 00:23:53.247 "supported_io_types": { 00:23:53.247 "read": true, 00:23:53.247 "write": true, 
00:23:53.247 "unmap": true, 00:23:53.247 "flush": true, 00:23:53.247 "reset": true, 00:23:53.247 "nvme_admin": false, 00:23:53.247 "nvme_io": false, 00:23:53.247 "nvme_io_md": false, 00:23:53.247 "write_zeroes": true, 00:23:53.247 "zcopy": true, 00:23:53.247 "get_zone_info": false, 00:23:53.247 "zone_management": false, 00:23:53.247 "zone_append": false, 00:23:53.247 "compare": false, 00:23:53.247 "compare_and_write": false, 00:23:53.247 "abort": true, 00:23:53.247 "seek_hole": false, 00:23:53.247 "seek_data": false, 00:23:53.247 "copy": true, 00:23:53.247 "nvme_iov_md": false 00:23:53.247 }, 00:23:53.247 "memory_domains": [ 00:23:53.247 { 00:23:53.247 "dma_device_id": "system", 00:23:53.247 "dma_device_type": 1 00:23:53.247 }, 00:23:53.247 { 00:23:53.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.247 "dma_device_type": 2 00:23:53.247 } 00:23:53.247 ], 00:23:53.247 "driver_specific": { 00:23:53.247 "passthru": { 00:23:53.247 "name": "pt1", 00:23:53.247 "base_bdev_name": "malloc1" 00:23:53.247 } 00:23:53.247 } 00:23:53.247 }' 00:23:53.247 10:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.247 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.247 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.247 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.247 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:53.505 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:53.764 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:53.764 "name": "pt2", 00:23:53.764 "aliases": [ 00:23:53.764 "00000000-0000-0000-0000-000000000002" 00:23:53.764 ], 00:23:53.764 "product_name": "passthru", 00:23:53.764 "block_size": 512, 00:23:53.764 "num_blocks": 65536, 00:23:53.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:53.764 "assigned_rate_limits": { 00:23:53.764 "rw_ios_per_sec": 0, 00:23:53.764 "rw_mbytes_per_sec": 0, 00:23:53.764 "r_mbytes_per_sec": 0, 00:23:53.764 "w_mbytes_per_sec": 0 00:23:53.764 }, 00:23:53.764 "claimed": true, 00:23:53.764 "claim_type": "exclusive_write", 00:23:53.764 "zoned": false, 00:23:53.764 "supported_io_types": { 00:23:53.764 "read": true, 00:23:53.764 "write": true, 00:23:53.764 "unmap": true, 00:23:53.764 "flush": true, 00:23:53.764 "reset": true, 00:23:53.764 "nvme_admin": false, 00:23:53.764 
"nvme_io": false, 00:23:53.764 "nvme_io_md": false, 00:23:53.764 "write_zeroes": true, 00:23:53.764 "zcopy": true, 00:23:53.764 "get_zone_info": false, 00:23:53.764 "zone_management": false, 00:23:53.764 "zone_append": false, 00:23:53.764 "compare": false, 00:23:53.764 "compare_and_write": false, 00:23:53.764 "abort": true, 00:23:53.764 "seek_hole": false, 00:23:53.764 "seek_data": false, 00:23:53.764 "copy": true, 00:23:53.764 "nvme_iov_md": false 00:23:53.764 }, 00:23:53.764 "memory_domains": [ 00:23:53.764 { 00:23:53.764 "dma_device_id": "system", 00:23:53.764 "dma_device_type": 1 00:23:53.764 }, 00:23:53.764 { 00:23:53.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.764 "dma_device_type": 2 00:23:53.764 } 00:23:53.764 ], 00:23:53.764 "driver_specific": { 00:23:53.764 "passthru": { 00:23:53.764 "name": "pt2", 00:23:53.764 "base_bdev_name": "malloc2" 00:23:53.764 } 00:23:53.764 } 00:23:53.764 }' 00:23:53.764 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.764 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:53.764 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:53.764 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.021 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.022 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:54.022 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:54.022 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:54.022 10:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:54.279 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:54.279 "name": "pt3", 00:23:54.279 "aliases": [ 00:23:54.279 "00000000-0000-0000-0000-000000000003" 00:23:54.279 ], 00:23:54.279 "product_name": "passthru", 00:23:54.279 "block_size": 512, 00:23:54.279 "num_blocks": 65536, 00:23:54.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:54.279 "assigned_rate_limits": { 00:23:54.279 "rw_ios_per_sec": 0, 00:23:54.279 "rw_mbytes_per_sec": 0, 00:23:54.279 "r_mbytes_per_sec": 0, 00:23:54.279 "w_mbytes_per_sec": 0 00:23:54.279 }, 00:23:54.279 "claimed": true, 00:23:54.279 "claim_type": "exclusive_write", 00:23:54.279 "zoned": false, 00:23:54.279 "supported_io_types": { 00:23:54.279 "read": true, 00:23:54.279 "write": true, 00:23:54.279 "unmap": true, 00:23:54.279 "flush": true, 00:23:54.279 "reset": true, 00:23:54.279 "nvme_admin": false, 00:23:54.279 "nvme_io": false, 00:23:54.279 "nvme_io_md": false, 00:23:54.279 "write_zeroes": true, 00:23:54.279 "zcopy": true, 00:23:54.279 
"get_zone_info": false, 00:23:54.279 "zone_management": false, 00:23:54.279 "zone_append": false, 00:23:54.279 "compare": false, 00:23:54.279 "compare_and_write": false, 00:23:54.279 "abort": true, 00:23:54.279 "seek_hole": false, 00:23:54.279 "seek_data": false, 00:23:54.279 "copy": true, 00:23:54.279 "nvme_iov_md": false 00:23:54.279 }, 00:23:54.279 "memory_domains": [ 00:23:54.279 { 00:23:54.279 "dma_device_id": "system", 00:23:54.279 "dma_device_type": 1 00:23:54.279 }, 00:23:54.279 { 00:23:54.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.279 "dma_device_type": 2 00:23:54.279 } 00:23:54.279 ], 00:23:54.279 "driver_specific": { 00:23:54.279 "passthru": { 00:23:54.279 "name": "pt3", 00:23:54.279 "base_bdev_name": "malloc3" 00:23:54.279 } 00:23:54.279 } 00:23:54.279 }' 00:23:54.279 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.537 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.795 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:54.795 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:54.795 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:54.795 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:54.795 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:55.053 "name": "pt4", 00:23:55.053 "aliases": [ 00:23:55.053 "00000000-0000-0000-0000-000000000004" 00:23:55.053 ], 00:23:55.053 "product_name": "passthru", 00:23:55.053 "block_size": 512, 00:23:55.053 "num_blocks": 65536, 00:23:55.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:55.053 "assigned_rate_limits": { 00:23:55.053 "rw_ios_per_sec": 0, 00:23:55.053 "rw_mbytes_per_sec": 0, 00:23:55.053 "r_mbytes_per_sec": 0, 00:23:55.053 "w_mbytes_per_sec": 0 00:23:55.053 }, 00:23:55.053 "claimed": true, 00:23:55.053 "claim_type": "exclusive_write", 00:23:55.053 "zoned": false, 00:23:55.053 "supported_io_types": { 00:23:55.053 "read": true, 00:23:55.053 "write": true, 00:23:55.053 "unmap": true, 00:23:55.053 "flush": true, 00:23:55.053 "reset": true, 00:23:55.053 "nvme_admin": false, 00:23:55.053 "nvme_io": false, 00:23:55.053 "nvme_io_md": false, 00:23:55.053 "write_zeroes": true, 00:23:55.053 "zcopy": true, 00:23:55.053 "get_zone_info": false, 00:23:55.053 "zone_management": false, 00:23:55.053 "zone_append": false, 00:23:55.053 "compare": false, 
00:23:55.053 "compare_and_write": false, 00:23:55.053 "abort": true, 00:23:55.053 "seek_hole": false, 00:23:55.053 "seek_data": false, 00:23:55.053 "copy": true, 00:23:55.053 "nvme_iov_md": false 00:23:55.053 }, 00:23:55.053 "memory_domains": [ 00:23:55.053 { 00:23:55.053 "dma_device_id": "system", 00:23:55.053 "dma_device_type": 1 00:23:55.053 }, 00:23:55.053 { 00:23:55.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.053 "dma_device_type": 2 00:23:55.053 } 00:23:55.053 ], 00:23:55.053 "driver_specific": { 00:23:55.053 "passthru": { 00:23:55.053 "name": "pt4", 00:23:55.053 "base_bdev_name": "malloc4" 00:23:55.053 } 00:23:55.053 } 00:23:55.053 }' 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:55.053 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:55.311 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:55.311 10:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:55.311 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:55.311 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:55.311 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:23:55.569 [2024-07-26 10:34:08.255274] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:55.569 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=a86a9171-4d78-48c1-ad97-96d4d8bc4be5 00:23:55.569 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z a86a9171-4d78-48c1-ad97-96d4d8bc4be5 ']' 00:23:55.569 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:55.827 [2024-07-26 10:34:08.483591] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.827 [2024-07-26 10:34:08.483611] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.827 [2024-07-26 10:34:08.483657] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.827 [2024-07-26 10:34:08.483734] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.827 [2024-07-26 10:34:08.483746] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14a86e0 name raid_bdev1, state offline 00:23:55.827 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.827 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:23:56.085 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:23:56.085 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:23:56.085 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.085 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:56.085 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.085 10:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:56.342 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.342 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:56.600 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:56.600 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:56.858 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:56.858 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:57.116 10:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:57.374 [2024-07-26 10:34:10.075796] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:57.374 [2024-07-26 10:34:10.077031] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:57.374 [2024-07-26 10:34:10.077070] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:57.374 [2024-07-26 10:34:10.077104] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:57.374 [2024-07-26 10:34:10.077153] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:57.374 [2024-07-26 10:34:10.077189] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:57.374 [2024-07-26 10:34:10.077210] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:57.374 [2024-07-26 10:34:10.077231] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:57.374 [2024-07-26 10:34:10.077248] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:57.374 [2024-07-26 10:34:10.077257] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14a86e0 name raid_bdev1, state configuring 00:23:57.374 request: 00:23:57.374 { 00:23:57.374 "name": "raid_bdev1", 00:23:57.374 "raid_level": "raid1", 00:23:57.374 "base_bdevs": [ 00:23:57.374 "malloc1", 00:23:57.374 "malloc2", 00:23:57.374 "malloc3", 00:23:57.374 "malloc4" 00:23:57.374 ], 00:23:57.374 "superblock": false, 00:23:57.374 "method": "bdev_raid_create", 00:23:57.374 "req_id": 1 00:23:57.374 } 00:23:57.374 Got JSON-RPC error response 00:23:57.374 response: 00:23:57.374 { 00:23:57.374 "code": -17, 00:23:57.374 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:57.374 } 00:23:57.374 10:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:57.374 10:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.374 10:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.374 10:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.374 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.374 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:23:57.631 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:23:57.631 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:23:57.631 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:57.631 [2024-07-26 10:34:10.532944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:57.631 [2024-07-26 10:34:10.532983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.631 [2024-07-26 10:34:10.533000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14d9a60 00:23:57.631 [2024-07-26 10:34:10.533012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.888 [2024-07-26 10:34:10.534463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.888 [2024-07-26 10:34:10.534490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:57.888 [2024-07-26 10:34:10.534548] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:57.888 [2024-07-26 10:34:10.534572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:57.888 pt1 00:23:57.888 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.889 "name": "raid_bdev1", 00:23:57.889 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:23:57.889 "strip_size_kb": 0, 00:23:57.889 "state": "configuring", 00:23:57.889 "raid_level": "raid1", 00:23:57.889 "superblock": true, 00:23:57.889 "num_base_bdevs": 4, 00:23:57.889 "num_base_bdevs_discovered": 1, 00:23:57.889 "num_base_bdevs_operational": 4, 00:23:57.889 "base_bdevs_list": [ 00:23:57.889 { 00:23:57.889 "name": "pt1", 00:23:57.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.889 "is_configured": true, 00:23:57.889 "data_offset": 2048, 00:23:57.889 "data_size": 63488 00:23:57.889 }, 00:23:57.889 { 00:23:57.889 "name": null, 00:23:57.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.889 "is_configured": false, 00:23:57.889 "data_offset": 2048, 00:23:57.889 "data_size": 63488 00:23:57.889 }, 00:23:57.889 { 00:23:57.889 
"name": null, 00:23:57.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:57.889 "is_configured": false, 00:23:57.889 "data_offset": 2048, 00:23:57.889 "data_size": 63488 00:23:57.889 }, 00:23:57.889 { 00:23:57.889 "name": null, 00:23:57.889 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:57.889 "is_configured": false, 00:23:57.889 "data_offset": 2048, 00:23:57.889 "data_size": 63488 00:23:57.889 } 00:23:57.889 ] 00:23:57.889 }' 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.889 10:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.453 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:23:58.453 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:58.710 [2024-07-26 10:34:11.483458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:58.710 [2024-07-26 10:34:11.483500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.710 [2024-07-26 10:34:11.483520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a8ed0 00:23:58.710 [2024-07-26 10:34:11.483531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.710 [2024-07-26 10:34:11.483844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.710 [2024-07-26 10:34:11.483860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:58.710 [2024-07-26 10:34:11.483912] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:58.710 [2024-07-26 10:34:11.483930] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:58.710 pt2 00:23:58.710 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:58.967 [2024-07-26 10:34:11.655924] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:58.967 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:23:58.967 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.967 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.967 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:58.967 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.968 "name": "raid_bdev1", 00:23:58.968 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:23:58.968 "strip_size_kb": 0, 00:23:58.968 "state": "configuring", 00:23:58.968 "raid_level": "raid1", 00:23:58.968 "superblock": true, 00:23:58.968 "num_base_bdevs": 4, 00:23:58.968 "num_base_bdevs_discovered": 1, 00:23:58.968 "num_base_bdevs_operational": 4, 00:23:58.968 "base_bdevs_list": [ 00:23:58.968 { 00:23:58.968 "name": "pt1", 00:23:58.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.968 "is_configured": true, 00:23:58.968 "data_offset": 2048, 00:23:58.968 "data_size": 63488 00:23:58.968 }, 00:23:58.968 { 00:23:58.968 "name": null, 00:23:58.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.968 "is_configured": false, 00:23:58.968 "data_offset": 2048, 00:23:58.968 "data_size": 63488 00:23:58.968 }, 00:23:58.968 { 00:23:58.968 "name": null, 00:23:58.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:58.968 "is_configured": false, 00:23:58.968 "data_offset": 2048, 00:23:58.968 "data_size": 63488 00:23:58.968 }, 00:23:58.968 { 00:23:58.968 "name": null, 00:23:58.968 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:58.968 "is_configured": false, 00:23:58.968 "data_offset": 2048, 00:23:58.968 "data_size": 63488 00:23:58.968 } 00:23:58.968 ] 00:23:58.968 }' 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.968 10:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.532 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:23:59.532 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:59.532 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:59.790 [2024-07-26 10:34:12.634653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:59.790 [2024-07-26 10:34:12.634697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.790 [2024-07-26 10:34:12.634714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a9980 00:23:59.790 [2024-07-26 10:34:12.634726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.790 [2024-07-26 10:34:12.635030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.790 [2024-07-26 10:34:12.635046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:59.790 [2024-07-26 10:34:12.635099] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:59.790 [2024-07-26 10:34:12.635116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:59.790 pt2 00:23:59.790 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:59.790 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:59.790 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:00.047 [2024-07-26 10:34:12.863436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:00.047 [2024-07-26 10:34:12.863479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.047 [2024-07-26 10:34:12.863495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a9220 00:24:00.047 [2024-07-26 10:34:12.863507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.047 [2024-07-26 10:34:12.863800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.047 [2024-07-26 10:34:12.863817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:00.047 [2024-07-26 10:34:12.863870] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:00.047 [2024-07-26 10:34:12.863887] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:00.047 pt3 00:24:00.047 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:00.047 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:00.047 10:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:00.305 [2024-07-26 10:34:13.092036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:00.305 [2024-07-26 10:34:13.092072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.305 [2024-07-26 10:34:13.092088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136d640 00:24:00.305 [2024-07-26 10:34:13.092100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.305 [2024-07-26 10:34:13.092385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.305 [2024-07-26 10:34:13.092403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:00.305 [2024-07-26 10:34:13.092454] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:00.305 [2024-07-26 10:34:13.092472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:00.305 [2024-07-26 10:34:13.092583] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x14a7390 00:24:00.305 [2024-07-26 10:34:13.092593] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:00.305 [2024-07-26 10:34:13.092757] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14ac160 00:24:00.305 [2024-07-26 10:34:13.092880] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14a7390 00:24:00.305 [2024-07-26 10:34:13.092889] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14a7390 00:24:00.305 [2024-07-26 10:34:13.092976] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.305 pt4 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:00.305 
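At this point the trace has looped over the remaining base devices: for each of malloc2, malloc3 and malloc4 a passthru bdev (pt2, pt3, pt4) is re-created with the UUID recorded in the RAID superblock, and once the last one is claimed the raid module re-assembles raid_bdev1 and brings it online. A condensed, hypothetical replay of that RPC sequence, using only the rpc.py path, socket, names and UUIDs visible in the trace above (not the test script's literal code), would be:

  # Condensed sketch of the per-base-bdev loop seen in the xtrace;
  # rpc.py path, socket and UUIDs are taken verbatim from the log above.
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 2 3 4; do
      # Each passthru bdev carries the UUID stored in the raid superblock,
      # so bdev_raid can match it against the existing raid_bdev1 config.
      $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done
  # After pt4 is claimed, superblocks are found on all four base bdevs and
  # raid_bdev1 transitions from "configuring" to "online", as the next
  # verify_raid_bdev_state call below confirms.
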
10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.305 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.564 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.564 "name": "raid_bdev1", 00:24:00.564 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:00.564 "strip_size_kb": 0, 00:24:00.564 "state": "online", 00:24:00.564 "raid_level": "raid1", 00:24:00.564 "superblock": true, 00:24:00.564 "num_base_bdevs": 4, 00:24:00.564 "num_base_bdevs_discovered": 4, 00:24:00.564 "num_base_bdevs_operational": 4, 00:24:00.564 "base_bdevs_list": [ 00:24:00.564 { 00:24:00.564 "name": "pt1", 00:24:00.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.564 "is_configured": true, 00:24:00.564 "data_offset": 2048, 00:24:00.564 "data_size": 63488 00:24:00.564 }, 00:24:00.564 { 00:24:00.564 "name": "pt2", 00:24:00.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.564 "is_configured": true, 00:24:00.564 "data_offset": 2048, 00:24:00.564 "data_size": 63488 00:24:00.564 }, 00:24:00.564 { 00:24:00.564 "name": "pt3", 00:24:00.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:00.564 "is_configured": true, 00:24:00.564 "data_offset": 2048, 00:24:00.564 "data_size": 63488 00:24:00.564 }, 00:24:00.564 { 00:24:00.565 "name": "pt4", 00:24:00.565 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:00.565 "is_configured": true, 00:24:00.565 "data_offset": 2048, 00:24:00.565 "data_size": 63488 00:24:00.565 } 00:24:00.565 ] 00:24:00.565 }' 00:24:00.565 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.565 10:34:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:01.129 10:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:01.386 [2024-07-26 10:34:14.038829] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:01.386 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:01.386 "name": "raid_bdev1", 00:24:01.386 "aliases": [ 00:24:01.386 "a86a9171-4d78-48c1-ad97-96d4d8bc4be5" 00:24:01.386 ], 00:24:01.386 "product_name": "Raid Volume", 00:24:01.386 "block_size": 512, 00:24:01.386 "num_blocks": 63488, 00:24:01.386 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:01.386 "assigned_rate_limits": { 00:24:01.386 "rw_ios_per_sec": 0, 00:24:01.386 "rw_mbytes_per_sec": 0, 00:24:01.386 "r_mbytes_per_sec": 0, 00:24:01.386 "w_mbytes_per_sec": 0 00:24:01.386 }, 00:24:01.386 "claimed": false, 00:24:01.386 "zoned": false, 00:24:01.386 "supported_io_types": { 00:24:01.386 "read": true, 00:24:01.386 "write": true, 00:24:01.386 "unmap": false, 00:24:01.386 "flush": false, 00:24:01.386 "reset": true, 00:24:01.386 "nvme_admin": false, 00:24:01.386 "nvme_io": false, 00:24:01.386 "nvme_io_md": false, 00:24:01.386 "write_zeroes": true, 00:24:01.386 "zcopy": false, 00:24:01.386 "get_zone_info": false, 00:24:01.386 "zone_management": false, 00:24:01.386 "zone_append": false, 00:24:01.386 "compare": false, 00:24:01.386 "compare_and_write": false, 00:24:01.386 "abort": false, 00:24:01.386 "seek_hole": false, 00:24:01.386 "seek_data": false, 00:24:01.386 "copy": false, 00:24:01.386 "nvme_iov_md": false 00:24:01.386 }, 00:24:01.386 "memory_domains": [ 00:24:01.386 { 00:24:01.386 "dma_device_id": "system", 00:24:01.386 "dma_device_type": 1 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.386 "dma_device_type": 2 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "system", 00:24:01.386 "dma_device_type": 1 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.386 "dma_device_type": 2 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "system", 00:24:01.386 "dma_device_type": 1 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.386 "dma_device_type": 2 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "system", 00:24:01.386 "dma_device_type": 1 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.386 "dma_device_type": 2 00:24:01.386 } 00:24:01.386 ], 00:24:01.386 "driver_specific": { 00:24:01.386 "raid": { 00:24:01.386 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:01.386 "strip_size_kb": 0, 00:24:01.386 "state": "online", 00:24:01.386 "raid_level": "raid1", 00:24:01.386 "superblock": true, 00:24:01.386 "num_base_bdevs": 4, 00:24:01.386 "num_base_bdevs_discovered": 4, 00:24:01.386 "num_base_bdevs_operational": 4, 00:24:01.386 "base_bdevs_list": [ 00:24:01.386 { 00:24:01.386 "name": "pt1", 00:24:01.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:01.386 "is_configured": true, 00:24:01.386 "data_offset": 2048, 00:24:01.386 "data_size": 63488 00:24:01.386 }, 00:24:01.386 { 00:24:01.386 "name": "pt2", 00:24:01.387 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:24:01.387 "is_configured": true, 00:24:01.387 "data_offset": 2048, 00:24:01.387 "data_size": 63488 00:24:01.387 }, 00:24:01.387 { 00:24:01.387 "name": "pt3", 00:24:01.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:01.387 "is_configured": true, 00:24:01.387 "data_offset": 2048, 00:24:01.387 "data_size": 63488 00:24:01.387 }, 00:24:01.387 { 00:24:01.387 "name": "pt4", 00:24:01.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:01.387 "is_configured": true, 00:24:01.387 "data_offset": 2048, 00:24:01.387 "data_size": 63488 00:24:01.387 } 00:24:01.387 ] 00:24:01.387 } 00:24:01.387 } 00:24:01.387 }' 00:24:01.387 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:01.387 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:01.387 pt2 00:24:01.387 pt3 00:24:01.387 pt4' 00:24:01.387 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.387 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:01.387 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.645 "name": "pt1", 00:24:01.645 "aliases": [ 00:24:01.645 "00000000-0000-0000-0000-000000000001" 00:24:01.645 ], 00:24:01.645 "product_name": "passthru", 00:24:01.645 "block_size": 512, 00:24:01.645 "num_blocks": 65536, 00:24:01.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:01.645 "assigned_rate_limits": { 00:24:01.645 "rw_ios_per_sec": 0, 00:24:01.645 "rw_mbytes_per_sec": 0, 00:24:01.645 "r_mbytes_per_sec": 0, 00:24:01.645 "w_mbytes_per_sec": 0 00:24:01.645 }, 00:24:01.645 "claimed": true, 00:24:01.645 "claim_type": "exclusive_write", 00:24:01.645 "zoned": false, 00:24:01.645 "supported_io_types": { 00:24:01.645 "read": true, 00:24:01.645 "write": true, 00:24:01.645 "unmap": true, 00:24:01.645 "flush": true, 00:24:01.645 "reset": true, 00:24:01.645 "nvme_admin": false, 00:24:01.645 "nvme_io": false, 00:24:01.645 "nvme_io_md": false, 00:24:01.645 "write_zeroes": true, 00:24:01.645 "zcopy": true, 00:24:01.645 "get_zone_info": false, 00:24:01.645 "zone_management": false, 00:24:01.645 "zone_append": false, 00:24:01.645 "compare": false, 00:24:01.645 "compare_and_write": false, 00:24:01.645 "abort": true, 00:24:01.645 "seek_hole": false, 00:24:01.645 "seek_data": false, 00:24:01.645 "copy": true, 00:24:01.645 "nvme_iov_md": false 00:24:01.645 }, 00:24:01.645 "memory_domains": [ 00:24:01.645 { 00:24:01.645 "dma_device_id": "system", 00:24:01.645 "dma_device_type": 1 00:24:01.645 }, 00:24:01.645 { 00:24:01.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.645 "dma_device_type": 2 00:24:01.645 } 00:24:01.645 ], 00:24:01.645 "driver_specific": { 00:24:01.645 "passthru": { 00:24:01.645 "name": "pt1", 00:24:01.645 "base_bdev_name": "malloc1" 00:24:01.645 } 00:24:01.645 } 00:24:01.645 }' 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.645 10:34:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.645 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:01.920 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:02.201 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:02.201 "name": "pt2", 00:24:02.201 "aliases": [ 00:24:02.201 "00000000-0000-0000-0000-000000000002" 00:24:02.201 ], 00:24:02.201 "product_name": "passthru", 00:24:02.201 "block_size": 512, 00:24:02.201 "num_blocks": 65536, 00:24:02.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:02.201 "assigned_rate_limits": { 00:24:02.201 "rw_ios_per_sec": 0, 00:24:02.201 "rw_mbytes_per_sec": 0, 00:24:02.201 "r_mbytes_per_sec": 0, 00:24:02.201 "w_mbytes_per_sec": 0 00:24:02.201 }, 00:24:02.201 "claimed": true, 00:24:02.201 "claim_type": "exclusive_write", 00:24:02.201 "zoned": false, 00:24:02.201 "supported_io_types": { 00:24:02.201 "read": true, 00:24:02.201 "write": true, 00:24:02.201 "unmap": true, 00:24:02.201 "flush": true, 00:24:02.201 "reset": true, 00:24:02.201 "nvme_admin": false, 00:24:02.201 "nvme_io": false, 00:24:02.201 "nvme_io_md": false, 00:24:02.201 "write_zeroes": true, 00:24:02.201 "zcopy": true, 00:24:02.201 "get_zone_info": false, 00:24:02.201 "zone_management": false, 00:24:02.201 "zone_append": false, 00:24:02.201 "compare": false, 00:24:02.201 "compare_and_write": false, 00:24:02.201 "abort": true, 00:24:02.201 "seek_hole": false, 00:24:02.201 "seek_data": false, 00:24:02.201 "copy": true, 00:24:02.201 "nvme_iov_md": false 00:24:02.201 }, 00:24:02.201 "memory_domains": [ 00:24:02.201 { 00:24:02.201 "dma_device_id": "system", 00:24:02.201 "dma_device_type": 1 00:24:02.201 }, 00:24:02.201 { 00:24:02.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.201 "dma_device_type": 2 00:24:02.201 } 00:24:02.201 ], 00:24:02.201 "driver_specific": { 00:24:02.201 "passthru": { 00:24:02.201 "name": "pt2", 00:24:02.201 "base_bdev_name": "malloc2" 00:24:02.201 } 00:24:02.201 } 00:24:02.201 }' 00:24:02.201 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.201 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.201 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:02.201 10:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.201 10:34:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.201 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:02.201 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.202 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.458 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:02.459 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.459 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.459 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.459 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:02.459 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:02.459 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:02.716 "name": "pt3", 00:24:02.716 "aliases": [ 00:24:02.716 "00000000-0000-0000-0000-000000000003" 00:24:02.716 ], 00:24:02.716 "product_name": "passthru", 00:24:02.716 "block_size": 512, 00:24:02.716 "num_blocks": 65536, 00:24:02.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:02.716 "assigned_rate_limits": { 00:24:02.716 "rw_ios_per_sec": 0, 00:24:02.716 "rw_mbytes_per_sec": 0, 00:24:02.716 "r_mbytes_per_sec": 0, 00:24:02.716 "w_mbytes_per_sec": 0 00:24:02.716 }, 00:24:02.716 "claimed": true, 00:24:02.716 "claim_type": "exclusive_write", 00:24:02.716 "zoned": false, 00:24:02.716 "supported_io_types": { 00:24:02.716 "read": true, 00:24:02.716 "write": true, 00:24:02.716 "unmap": true, 00:24:02.716 "flush": true, 00:24:02.716 "reset": true, 00:24:02.716 "nvme_admin": false, 00:24:02.716 "nvme_io": false, 00:24:02.716 "nvme_io_md": false, 00:24:02.716 "write_zeroes": true, 00:24:02.716 "zcopy": true, 00:24:02.716 "get_zone_info": false, 00:24:02.716 "zone_management": false, 00:24:02.716 "zone_append": false, 00:24:02.716 "compare": false, 00:24:02.716 "compare_and_write": false, 00:24:02.716 "abort": true, 00:24:02.716 "seek_hole": false, 00:24:02.716 "seek_data": false, 00:24:02.716 "copy": true, 00:24:02.716 "nvme_iov_md": false 00:24:02.716 }, 00:24:02.716 "memory_domains": [ 00:24:02.716 { 00:24:02.716 "dma_device_id": "system", 00:24:02.716 "dma_device_type": 1 00:24:02.716 }, 00:24:02.716 { 00:24:02.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.716 "dma_device_type": 2 00:24:02.716 } 00:24:02.716 ], 00:24:02.716 "driver_specific": { 00:24:02.716 "passthru": { 00:24:02.716 "name": "pt3", 00:24:02.716 "base_bdev_name": "malloc3" 00:24:02.716 } 00:24:02.716 } 00:24:02.716 }' 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:24:02.716 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:02.974 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:03.231 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:03.231 "name": "pt4", 00:24:03.231 "aliases": [ 00:24:03.231 "00000000-0000-0000-0000-000000000004" 00:24:03.231 ], 00:24:03.231 "product_name": "passthru", 00:24:03.231 "block_size": 512, 00:24:03.231 "num_blocks": 65536, 00:24:03.231 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:03.231 "assigned_rate_limits": { 00:24:03.231 "rw_ios_per_sec": 0, 00:24:03.231 "rw_mbytes_per_sec": 0, 00:24:03.231 "r_mbytes_per_sec": 0, 00:24:03.231 "w_mbytes_per_sec": 0 00:24:03.231 }, 00:24:03.231 "claimed": true, 00:24:03.231 "claim_type": "exclusive_write", 00:24:03.231 "zoned": false, 00:24:03.231 "supported_io_types": { 00:24:03.231 "read": true, 00:24:03.231 "write": true, 00:24:03.231 "unmap": true, 00:24:03.231 "flush": true, 00:24:03.231 "reset": true, 00:24:03.231 "nvme_admin": false, 00:24:03.231 "nvme_io": false, 00:24:03.231 "nvme_io_md": false, 00:24:03.231 "write_zeroes": true, 00:24:03.231 "zcopy": true, 00:24:03.231 "get_zone_info": false, 00:24:03.231 "zone_management": false, 00:24:03.231 "zone_append": false, 00:24:03.231 "compare": false, 00:24:03.231 "compare_and_write": false, 00:24:03.231 "abort": true, 00:24:03.231 "seek_hole": false, 00:24:03.231 "seek_data": false, 00:24:03.231 "copy": true, 00:24:03.231 "nvme_iov_md": false 00:24:03.231 }, 00:24:03.231 "memory_domains": [ 00:24:03.231 { 00:24:03.231 "dma_device_id": "system", 00:24:03.231 "dma_device_type": 1 00:24:03.231 }, 00:24:03.231 { 00:24:03.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.231 "dma_device_type": 2 00:24:03.231 } 00:24:03.231 ], 00:24:03.231 "driver_specific": { 00:24:03.231 "passthru": { 00:24:03.231 "name": "pt4", 00:24:03.231 "base_bdev_name": "malloc4" 00:24:03.231 } 00:24:03.231 } 00:24:03.231 }' 00:24:03.231 10:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:03.231 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:03.231 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:03.231 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:03.231 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:03.488 10:34:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:03.488 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:24:03.746 [2024-07-26 10:34:16.545425] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.746 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' a86a9171-4d78-48c1-ad97-96d4d8bc4be5 '!=' a86a9171-4d78-48c1-ad97-96d4d8bc4be5 ']' 00:24:03.746 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:24:03.746 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:03.746 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:03.746 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:04.003 [2024-07-26 10:34:16.773779] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.003 10:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.260 10:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.260 "name": "raid_bdev1", 00:24:04.260 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:04.260 "strip_size_kb": 0, 00:24:04.260 "state": "online", 00:24:04.260 "raid_level": "raid1", 00:24:04.260 "superblock": true, 00:24:04.260 "num_base_bdevs": 4, 00:24:04.260 "num_base_bdevs_discovered": 3, 00:24:04.260 "num_base_bdevs_operational": 3, 00:24:04.260 
"base_bdevs_list": [ 00:24:04.260 { 00:24:04.260 "name": null, 00:24:04.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.260 "is_configured": false, 00:24:04.260 "data_offset": 2048, 00:24:04.260 "data_size": 63488 00:24:04.260 }, 00:24:04.260 { 00:24:04.260 "name": "pt2", 00:24:04.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:04.260 "is_configured": true, 00:24:04.260 "data_offset": 2048, 00:24:04.260 "data_size": 63488 00:24:04.260 }, 00:24:04.260 { 00:24:04.260 "name": "pt3", 00:24:04.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:04.260 "is_configured": true, 00:24:04.260 "data_offset": 2048, 00:24:04.260 "data_size": 63488 00:24:04.260 }, 00:24:04.260 { 00:24:04.260 "name": "pt4", 00:24:04.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:04.260 "is_configured": true, 00:24:04.260 "data_offset": 2048, 00:24:04.260 "data_size": 63488 00:24:04.260 } 00:24:04.260 ] 00:24:04.260 }' 00:24:04.260 10:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.260 10:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.824 10:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:05.079 [2024-07-26 10:34:17.780441] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:05.079 [2024-07-26 10:34:17.780465] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:05.079 [2024-07-26 10:34:17.780521] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.080 [2024-07-26 10:34:17.780588] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:05.080 [2024-07-26 10:34:17.780599] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14a7390 name raid_bdev1, state offline 00:24:05.080 10:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.080 10:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:24:05.336 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:24:05.336 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:24:05.336 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:05.336 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:05.336 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:05.593 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:05.593 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:05.593 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:05.593 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:05.593 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:05.850 10:34:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:05.850 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:05.850 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:24:05.850 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:24:05.850 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:05.850 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:06.106 [2024-07-26 10:34:18.891307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:06.106 [2024-07-26 10:34:18.891356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.107 [2024-07-26 10:34:18.891373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a8c20 00:24:06.107 [2024-07-26 10:34:18.891385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.107 [2024-07-26 10:34:18.892844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.107 [2024-07-26 10:34:18.892871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:06.107 [2024-07-26 10:34:18.892926] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:06.107 [2024-07-26 10:34:18.892950] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:06.107 pt2 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.107 10:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.363 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.363 "name": "raid_bdev1", 00:24:06.363 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:06.363 "strip_size_kb": 0, 00:24:06.363 "state": "configuring", 00:24:06.363 "raid_level": "raid1", 00:24:06.363 "superblock": true, 
00:24:06.363 "num_base_bdevs": 4, 00:24:06.363 "num_base_bdevs_discovered": 1, 00:24:06.363 "num_base_bdevs_operational": 3, 00:24:06.363 "base_bdevs_list": [ 00:24:06.363 { 00:24:06.363 "name": null, 00:24:06.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.363 "is_configured": false, 00:24:06.363 "data_offset": 2048, 00:24:06.363 "data_size": 63488 00:24:06.363 }, 00:24:06.363 { 00:24:06.363 "name": "pt2", 00:24:06.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:06.363 "is_configured": true, 00:24:06.363 "data_offset": 2048, 00:24:06.363 "data_size": 63488 00:24:06.363 }, 00:24:06.363 { 00:24:06.363 "name": null, 00:24:06.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:06.363 "is_configured": false, 00:24:06.363 "data_offset": 2048, 00:24:06.363 "data_size": 63488 00:24:06.363 }, 00:24:06.363 { 00:24:06.363 "name": null, 00:24:06.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:06.363 "is_configured": false, 00:24:06.363 "data_offset": 2048, 00:24:06.363 "data_size": 63488 00:24:06.363 } 00:24:06.363 ] 00:24:06.363 }' 00:24:06.363 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.363 10:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.925 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:24:06.925 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:06.925 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:07.182 [2024-07-26 10:34:19.905983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:07.182 [2024-07-26 10:34:19.906023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.182 [2024-07-26 10:34:19.906040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14abf20 00:24:07.182 [2024-07-26 10:34:19.906052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.182 [2024-07-26 10:34:19.906368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.182 [2024-07-26 10:34:19.906385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:07.182 [2024-07-26 10:34:19.906438] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:07.182 [2024-07-26 10:34:19.906455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:07.182 pt3 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.182 10:34:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.182 10:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.439 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.439 "name": "raid_bdev1", 00:24:07.439 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:07.439 "strip_size_kb": 0, 00:24:07.439 "state": "configuring", 00:24:07.439 "raid_level": "raid1", 00:24:07.439 "superblock": true, 00:24:07.439 "num_base_bdevs": 4, 00:24:07.439 "num_base_bdevs_discovered": 2, 00:24:07.439 "num_base_bdevs_operational": 3, 00:24:07.439 "base_bdevs_list": [ 00:24:07.439 { 00:24:07.439 "name": null, 00:24:07.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.439 "is_configured": false, 00:24:07.439 "data_offset": 2048, 00:24:07.439 "data_size": 63488 00:24:07.439 }, 00:24:07.439 { 00:24:07.439 "name": "pt2", 00:24:07.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:07.439 "is_configured": true, 00:24:07.439 "data_offset": 2048, 00:24:07.439 "data_size": 63488 00:24:07.439 }, 00:24:07.439 { 00:24:07.439 "name": "pt3", 00:24:07.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:07.439 "is_configured": true, 00:24:07.439 "data_offset": 2048, 00:24:07.439 "data_size": 63488 00:24:07.439 }, 00:24:07.439 { 00:24:07.439 "name": null, 00:24:07.439 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:07.439 "is_configured": false, 00:24:07.439 "data_offset": 2048, 00:24:07.439 "data_size": 63488 00:24:07.439 } 00:24:07.439 ] 00:24:07.439 }' 00:24:07.439 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.439 10:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.002 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:24:08.002 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:24:08.002 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:24:08.002 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:08.258 [2024-07-26 10:34:20.936763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:08.258 [2024-07-26 10:34:20.936805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.258 [2024-07-26 10:34:20.936821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b9910 00:24:08.258 [2024-07-26 10:34:20.936833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.258 [2024-07-26 10:34:20.937134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.258 [2024-07-26 10:34:20.937156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:08.258 [2024-07-26 10:34:20.937227] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:08.258 [2024-07-26 10:34:20.937245] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:08.258 [2024-07-26 10:34:20.937347] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x14a9220 00:24:08.258 [2024-07-26 10:34:20.937357] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:08.258 [2024-07-26 10:34:20.937515] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14abed0 00:24:08.258 [2024-07-26 10:34:20.937634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14a9220 00:24:08.258 [2024-07-26 10:34:20.937644] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14a9220 00:24:08.258 [2024-07-26 10:34:20.937731] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.258 pt4 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.258 10:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.514 10:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.514 "name": "raid_bdev1", 00:24:08.514 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:08.514 "strip_size_kb": 0, 00:24:08.514 "state": "online", 00:24:08.514 "raid_level": "raid1", 00:24:08.514 "superblock": true, 00:24:08.514 "num_base_bdevs": 4, 00:24:08.514 "num_base_bdevs_discovered": 3, 00:24:08.514 "num_base_bdevs_operational": 3, 00:24:08.514 "base_bdevs_list": [ 00:24:08.514 { 00:24:08.514 "name": null, 00:24:08.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.514 "is_configured": false, 00:24:08.514 "data_offset": 2048, 00:24:08.514 "data_size": 63488 00:24:08.514 }, 00:24:08.514 { 00:24:08.514 "name": "pt2", 00:24:08.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:08.514 "is_configured": true, 00:24:08.514 "data_offset": 2048, 00:24:08.514 "data_size": 63488 00:24:08.514 }, 00:24:08.514 { 00:24:08.514 "name": "pt3", 00:24:08.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:08.514 "is_configured": true, 00:24:08.514 "data_offset": 2048, 00:24:08.514 "data_size": 63488 00:24:08.514 
}, 00:24:08.514 { 00:24:08.514 "name": "pt4", 00:24:08.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:08.514 "is_configured": true, 00:24:08.514 "data_offset": 2048, 00:24:08.514 "data_size": 63488 00:24:08.514 } 00:24:08.514 ] 00:24:08.514 }' 00:24:08.514 10:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.514 10:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.076 10:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:09.331 [2024-07-26 10:34:21.983523] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:09.331 [2024-07-26 10:34:21.983547] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:09.331 [2024-07-26 10:34:21.983596] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:09.331 [2024-07-26 10:34:21.983668] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:09.331 [2024-07-26 10:34:21.983684] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14a9220 name raid_bdev1, state offline 00:24:09.331 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.331 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:24:09.587 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:24:09.587 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:24:09.587 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:24:09.587 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:24:09.587 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:09.587 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:09.844 [2024-07-26 10:34:22.657262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:09.844 [2024-07-26 10:34:22.657306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.844 [2024-07-26 10:34:22.657323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136ae10 00:24:09.844 [2024-07-26 10:34:22.657335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.844 [2024-07-26 10:34:22.658830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.844 [2024-07-26 10:34:22.658857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:09.844 [2024-07-26 10:34:22.658917] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:09.844 [2024-07-26 10:34:22.658940] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:09.844 [2024-07-26 10:34:22.659029] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than 
existing raid bdev raid_bdev1 (2) 00:24:09.844 [2024-07-26 10:34:22.659041] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:09.844 [2024-07-26 10:34:22.659053] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14a7f30 name raid_bdev1, state configuring 00:24:09.844 [2024-07-26 10:34:22.659073] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:09.844 [2024-07-26 10:34:22.659156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:09.844 pt1 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.844 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.101 10:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.101 "name": "raid_bdev1", 00:24:10.101 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:10.101 "strip_size_kb": 0, 00:24:10.101 "state": "configuring", 00:24:10.101 "raid_level": "raid1", 00:24:10.101 "superblock": true, 00:24:10.101 "num_base_bdevs": 4, 00:24:10.101 "num_base_bdevs_discovered": 2, 00:24:10.101 "num_base_bdevs_operational": 3, 00:24:10.101 "base_bdevs_list": [ 00:24:10.101 { 00:24:10.101 "name": null, 00:24:10.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.101 "is_configured": false, 00:24:10.101 "data_offset": 2048, 00:24:10.101 "data_size": 63488 00:24:10.101 }, 00:24:10.101 { 00:24:10.101 "name": "pt2", 00:24:10.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:10.101 "is_configured": true, 00:24:10.101 "data_offset": 2048, 00:24:10.101 "data_size": 63488 00:24:10.101 }, 00:24:10.101 { 00:24:10.101 "name": "pt3", 00:24:10.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:10.101 "is_configured": true, 00:24:10.101 "data_offset": 2048, 00:24:10.101 "data_size": 63488 00:24:10.101 }, 00:24:10.101 { 00:24:10.101 "name": null, 00:24:10.101 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:10.101 "is_configured": false, 00:24:10.101 "data_offset": 2048, 00:24:10.101 "data_size": 63488 00:24:10.101 } 00:24:10.101 ] 00:24:10.101 }' 00:24:10.101 10:34:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.101 10:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.666 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:24:10.666 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:10.924 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:24:10.924 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:11.181 [2024-07-26 10:34:23.900549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:11.181 [2024-07-26 10:34:23.900597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.181 [2024-07-26 10:34:23.900615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b9910 00:24:11.181 [2024-07-26 10:34:23.900627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.181 [2024-07-26 10:34:23.900931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.181 [2024-07-26 10:34:23.900947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:11.181 [2024-07-26 10:34:23.901000] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:11.181 [2024-07-26 10:34:23.901018] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:11.181 [2024-07-26 10:34:23.901123] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x14aafc0 00:24:11.181 [2024-07-26 10:34:23.901133] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:11.181 [2024-07-26 10:34:23.901304] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14a8370 00:24:11.181 [2024-07-26 10:34:23.901423] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14aafc0 00:24:11.181 [2024-07-26 10:34:23.901432] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14aafc0 00:24:11.181 [2024-07-26 10:34:23.901521] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.181 pt4 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.181 10:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.438 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.438 "name": "raid_bdev1", 00:24:11.438 "uuid": "a86a9171-4d78-48c1-ad97-96d4d8bc4be5", 00:24:11.438 "strip_size_kb": 0, 00:24:11.438 "state": "online", 00:24:11.438 "raid_level": "raid1", 00:24:11.438 "superblock": true, 00:24:11.438 "num_base_bdevs": 4, 00:24:11.438 "num_base_bdevs_discovered": 3, 00:24:11.438 "num_base_bdevs_operational": 3, 00:24:11.438 "base_bdevs_list": [ 00:24:11.438 { 00:24:11.438 "name": null, 00:24:11.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.438 "is_configured": false, 00:24:11.438 "data_offset": 2048, 00:24:11.438 "data_size": 63488 00:24:11.438 }, 00:24:11.438 { 00:24:11.438 "name": "pt2", 00:24:11.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:11.438 "is_configured": true, 00:24:11.438 "data_offset": 2048, 00:24:11.438 "data_size": 63488 00:24:11.438 }, 00:24:11.438 { 00:24:11.438 "name": "pt3", 00:24:11.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:11.438 "is_configured": true, 00:24:11.438 "data_offset": 2048, 00:24:11.438 "data_size": 63488 00:24:11.438 }, 00:24:11.438 { 00:24:11.438 "name": "pt4", 00:24:11.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:11.438 "is_configured": true, 00:24:11.438 "data_offset": 2048, 00:24:11.438 "data_size": 63488 00:24:11.438 } 00:24:11.438 ] 00:24:11.438 }' 00:24:11.438 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.438 10:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.002 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:24:12.002 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:12.259 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:24:12.259 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:12.259 10:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:24:12.516 [2024-07-26 10:34:25.168222] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.516 10:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' a86a9171-4d78-48c1-ad97-96d4d8bc4be5 '!=' a86a9171-4d78-48c1-ad97-96d4d8bc4be5 ']' 00:24:12.516 10:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3465568 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3465568 ']' 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3465568 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:12.517 10:34:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3465568 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3465568' 00:24:12.517 killing process with pid 3465568 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3465568 00:24:12.517 [2024-07-26 10:34:25.232226] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:12.517 [2024-07-26 10:34:25.232274] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.517 [2024-07-26 10:34:25.232337] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:12.517 [2024-07-26 10:34:25.232348] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14aafc0 name raid_bdev1, state offline 00:24:12.517 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3465568 00:24:12.517 [2024-07-26 10:34:25.262659] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:12.774 10:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:24:12.774 00:24:12.774 real 0m23.711s 00:24:12.774 user 0m43.315s 00:24:12.774 sys 0m4.390s 00:24:12.774 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:12.774 10:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.774 ************************************ 00:24:12.774 END TEST raid_superblock_test 00:24:12.774 ************************************ 00:24:12.774 10:34:25 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:24:12.774 10:34:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:12.774 10:34:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:12.774 10:34:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:12.774 ************************************ 00:24:12.774 START TEST raid_read_error_test 00:24:12.774 ************************************ 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 
00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.qq5eo1f71s 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3470066 00:24:12.774 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3470066 /var/tmp/spdk-raid.sock 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3470066 ']' 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:12.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.775 10:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.775 [2024-07-26 10:34:25.587903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
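The read-error test traced below is driven entirely over the RPC socket of the bdevperf app launched above. A minimal sketch of that RPC sequence, assuming bdevperf is already listening on /var/tmp/spdk-raid.sock; the command names and arguments are the ones that appear in the trace, while the loop and the shortened rpc.py path are simplifications:

    # Sketch only: bdevperf must already be running with -r /var/tmp/spdk-raid.sock.
    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3 4; do
        # 32 MB malloc bdev (512-byte blocks), wrapped in an error-injection
        # bdev (EE_*) and then a passthru bdev that the raid module can claim.
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $RPC bdev_error_create BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done

    # raid1 over the four passthru bdevs, with an on-disk superblock (-s).
    $RPC bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

    # Inject read failures on the first base bdev, run bdevperf I/O, then
    # check the array state.
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

For injected read failures, raid1 redundancy is expected to absorb the errors, so the bdev_raid_get_bdevs output further down still reports the array online with all four base bdevs discovered and operational; in the write-error variant later in the log the failing base bdev is removed instead and only three remain operational.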
00:24:12.775 [2024-07-26 10:34:25.587957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470066 ] 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:12.775 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:12.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.775 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:13.032 [2024-07-26 10:34:25.723270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.032 [2024-07-26 10:34:25.767893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.032 [2024-07-26 10:34:25.827988] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.032 [2024-07-26 10:34:25.828024] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.625 10:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.625 10:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:13.625 10:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:13.625 10:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:13.886 BaseBdev1_malloc 00:24:13.886 10:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:14.143 true 00:24:14.143 10:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:14.401 [2024-07-26 10:34:27.179185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:14.401 [2024-07-26 10:34:27.179225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.401 [2024-07-26 10:34:27.179243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb8d7c0 00:24:14.401 [2024-07-26 10:34:27.179254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.401 [2024-07-26 10:34:27.180812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.401 [2024-07-26 10:34:27.180838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:14.401 BaseBdev1 00:24:14.401 10:34:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:14.401 10:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:14.658 BaseBdev2_malloc 00:24:14.658 10:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:14.915 true 00:24:14.915 10:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:15.172 [2024-07-26 10:34:27.845389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:15.172 [2024-07-26 10:34:27.845427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.172 [2024-07-26 10:34:27.845447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb34960 00:24:15.172 [2024-07-26 10:34:27.845458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.172 [2024-07-26 10:34:27.846783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.172 [2024-07-26 10:34:27.846810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:15.172 BaseBdev2 00:24:15.172 10:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:15.172 10:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:15.429 BaseBdev3_malloc 00:24:15.429 10:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:15.429 true 00:24:15.429 10:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:15.686 [2024-07-26 10:34:28.527427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:15.686 [2024-07-26 10:34:28.527466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.686 [2024-07-26 10:34:28.527483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb37720 00:24:15.686 [2024-07-26 10:34:28.527494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.686 [2024-07-26 10:34:28.528844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.686 [2024-07-26 10:34:28.528870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:15.686 BaseBdev3 00:24:15.686 10:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:15.686 10:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:15.943 BaseBdev4_malloc 00:24:15.943 10:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:16.201 true 00:24:16.201 10:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:16.459 [2024-07-26 10:34:29.201553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:16.459 [2024-07-26 10:34:29.201593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.459 [2024-07-26 10:34:29.201611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb368b0 00:24:16.459 [2024-07-26 10:34:29.201623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.459 [2024-07-26 10:34:29.202984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.459 [2024-07-26 10:34:29.203010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:16.459 BaseBdev4 00:24:16.459 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:16.717 [2024-07-26 10:34:29.430176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:16.717 [2024-07-26 10:34:29.431326] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:16.717 [2024-07-26 10:34:29.431386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:16.717 [2024-07-26 10:34:29.431443] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:16.717 [2024-07-26 10:34:29.431635] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb39080 00:24:16.717 [2024-07-26 10:34:29.431646] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:16.717 [2024-07-26 10:34:29.431827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3e6a0 00:24:16.717 [2024-07-26 10:34:29.431963] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb39080 00:24:16.717 [2024-07-26 10:34:29.431976] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb39080 00:24:16.717 [2024-07-26 10:34:29.432084] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.717 
10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.717 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.975 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.975 "name": "raid_bdev1", 00:24:16.975 "uuid": "1c9b11c8-5220-4de8-a935-5f3fd2c3b631", 00:24:16.975 "strip_size_kb": 0, 00:24:16.975 "state": "online", 00:24:16.975 "raid_level": "raid1", 00:24:16.975 "superblock": true, 00:24:16.975 "num_base_bdevs": 4, 00:24:16.975 "num_base_bdevs_discovered": 4, 00:24:16.975 "num_base_bdevs_operational": 4, 00:24:16.975 "base_bdevs_list": [ 00:24:16.975 { 00:24:16.975 "name": "BaseBdev1", 00:24:16.975 "uuid": "c9154955-522a-5348-8b84-413039b6f540", 00:24:16.975 "is_configured": true, 00:24:16.975 "data_offset": 2048, 00:24:16.975 "data_size": 63488 00:24:16.975 }, 00:24:16.975 { 00:24:16.975 "name": "BaseBdev2", 00:24:16.975 "uuid": "c2427a89-c9ae-5b98-8e6d-ded036877676", 00:24:16.975 "is_configured": true, 00:24:16.975 "data_offset": 2048, 00:24:16.975 "data_size": 63488 00:24:16.975 }, 00:24:16.975 { 00:24:16.975 "name": "BaseBdev3", 00:24:16.975 "uuid": "f680a510-9c26-5870-aea9-250ba69a08a2", 00:24:16.975 "is_configured": true, 00:24:16.975 "data_offset": 2048, 00:24:16.975 "data_size": 63488 00:24:16.975 }, 00:24:16.975 { 00:24:16.975 "name": "BaseBdev4", 00:24:16.975 "uuid": "ac816401-3f32-5147-b5eb-c1d607b9f789", 00:24:16.975 "is_configured": true, 00:24:16.975 "data_offset": 2048, 00:24:16.975 "data_size": 63488 00:24:16.975 } 00:24:16.975 ] 00:24:16.975 }' 00:24:16.975 10:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.975 10:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.541 10:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:24:17.541 10:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:17.541 [2024-07-26 10:34:30.332783] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3cb70 00:24:18.474 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.732 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.990 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:18.990 "name": "raid_bdev1", 00:24:18.990 "uuid": "1c9b11c8-5220-4de8-a935-5f3fd2c3b631", 00:24:18.990 "strip_size_kb": 0, 00:24:18.990 "state": "online", 00:24:18.990 "raid_level": "raid1", 00:24:18.990 "superblock": true, 00:24:18.990 "num_base_bdevs": 4, 00:24:18.990 "num_base_bdevs_discovered": 4, 00:24:18.990 "num_base_bdevs_operational": 4, 00:24:18.990 "base_bdevs_list": [ 00:24:18.990 { 00:24:18.990 "name": "BaseBdev1", 00:24:18.991 "uuid": "c9154955-522a-5348-8b84-413039b6f540", 00:24:18.991 "is_configured": true, 00:24:18.991 "data_offset": 2048, 00:24:18.991 "data_size": 63488 00:24:18.991 }, 00:24:18.991 { 00:24:18.991 "name": "BaseBdev2", 00:24:18.991 "uuid": "c2427a89-c9ae-5b98-8e6d-ded036877676", 00:24:18.991 "is_configured": true, 00:24:18.991 "data_offset": 2048, 00:24:18.991 "data_size": 63488 00:24:18.991 }, 00:24:18.991 { 00:24:18.991 "name": "BaseBdev3", 00:24:18.991 "uuid": "f680a510-9c26-5870-aea9-250ba69a08a2", 00:24:18.991 "is_configured": true, 00:24:18.991 "data_offset": 2048, 00:24:18.991 "data_size": 63488 00:24:18.991 }, 00:24:18.991 { 00:24:18.991 "name": "BaseBdev4", 00:24:18.991 "uuid": "ac816401-3f32-5147-b5eb-c1d607b9f789", 00:24:18.991 "is_configured": true, 00:24:18.991 "data_offset": 2048, 00:24:18.991 "data_size": 63488 00:24:18.991 } 00:24:18.991 ] 00:24:18.991 }' 00:24:18.991 10:34:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.991 10:34:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.556 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:19.814 [2024-07-26 10:34:32.516214] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:19.814 [2024-07-26 10:34:32.516249] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:19.814 [2024-07-26 10:34:32.519147] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.814 [2024-07-26 10:34:32.519183] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.814 [2024-07-26 10:34:32.519291] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:24:19.814 [2024-07-26 10:34:32.519302] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb39080 name raid_bdev1, state offline 00:24:19.814 0 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3470066 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3470066 ']' 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3470066 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3470066 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3470066' 00:24:19.814 killing process with pid 3470066 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3470066 00:24:19.814 [2024-07-26 10:34:32.622595] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:19.814 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3470066 00:24:19.814 [2024-07-26 10:34:32.648385] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.qq5eo1f71s 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:20.073 00:24:20.073 real 0m7.326s 00:24:20.073 user 0m11.689s 00:24:20.073 sys 0m1.313s 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.073 10:34:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.073 ************************************ 00:24:20.073 END TEST raid_read_error_test 00:24:20.073 ************************************ 00:24:20.073 10:34:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:24:20.073 10:34:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:20.073 10:34:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.073 10:34:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.073 ************************************ 00:24:20.073 START TEST raid_write_error_test 00:24:20.073 ************************************ 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # 
raid_io_error_test raid1 4 write 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.5ki1Hx4FEO 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3471425 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3471425 /var/tmp/spdk-raid.sock 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' 
-z 3471425 ']' 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.073 10:34:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.331 [2024-07-26 10:34:33.048797] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:24:20.331 [2024-07-26 10:34:33.048928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471425 ] 00:24:20.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.331 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:20.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.331 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:20.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.331 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:20.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.331 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:20.331 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.331 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 
0000:3f:01.0 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:20.332 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:20.332 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:20.590 [2024-07-26 10:34:33.261750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.590 [2024-07-26 10:34:33.305459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.590 [2024-07-26 10:34:33.363957] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:20.590 [2024-07-26 10:34:33.364006] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.157 10:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.157 10:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:21.157 10:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:21.157 10:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:21.415 BaseBdev1_malloc 00:24:21.415 10:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:21.673 true 00:24:21.673 10:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:21.673 [2024-07-26 10:34:34.543220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:21.673 [2024-07-26 10:34:34.543260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.673 [2024-07-26 10:34:34.543277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28b17c0 00:24:21.673 [2024-07-26 10:34:34.543289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.673 [2024-07-26 10:34:34.544887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.673 [2024-07-26 10:34:34.544914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:21.673 BaseBdev1 00:24:21.673 10:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:21.673 10:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:21.930 BaseBdev2_malloc 00:24:21.930 10:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:22.188 true 00:24:22.188 10:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:22.445 [2024-07-26 10:34:35.165124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:22.445 [2024-07-26 10:34:35.165171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.445 [2024-07-26 10:34:35.165190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2858960 00:24:22.445 [2024-07-26 10:34:35.165202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.445 [2024-07-26 10:34:35.166566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.445 [2024-07-26 10:34:35.166592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:22.446 BaseBdev2 00:24:22.446 10:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:22.446 10:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:22.703 BaseBdev3_malloc 00:24:22.703 10:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:22.961 true 00:24:22.961 10:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:23.218 [2024-07-26 10:34:36.119818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:23.218 [2024-07-26 10:34:36.119861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.218 [2024-07-26 10:34:36.119880] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285b720 00:24:23.218 [2024-07-26 10:34:36.119891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.476 [2024-07-26 10:34:36.121271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.476 [2024-07-26 10:34:36.121297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:23.476 BaseBdev3 00:24:23.476 10:34:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:23.476 10:34:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:23.476 BaseBdev4_malloc 00:24:23.733 10:34:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:23.733 true 00:24:23.733 10:34:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:23.991 [2024-07-26 10:34:36.817926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:23.991 [2024-07-26 10:34:36.817967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.991 [2024-07-26 10:34:36.817986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285a8b0 00:24:23.991 [2024-07-26 10:34:36.817997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.991 [2024-07-26 10:34:36.819353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.991 [2024-07-26 10:34:36.819379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:23.991 BaseBdev4 00:24:23.991 10:34:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:24.249 [2024-07-26 10:34:37.042594] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:24.249 [2024-07-26 10:34:37.043744] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:24.249 [2024-07-26 10:34:37.043804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:24.249 [2024-07-26 10:34:37.043862] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:24.249 [2024-07-26 10:34:37.044051] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x285d080 00:24:24.249 [2024-07-26 10:34:37.044061] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:24.249 [2024-07-26 10:34:37.044251] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28626a0 00:24:24.249 [2024-07-26 10:34:37.044386] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x285d080 00:24:24.249 [2024-07-26 10:34:37.044395] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x285d080 00:24:24.249 [2024-07-26 10:34:37.044503] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.249 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.507 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.507 "name": "raid_bdev1", 00:24:24.507 "uuid": "5cc9be22-67e2-4f7a-a5ee-544d94e5654f", 00:24:24.507 "strip_size_kb": 0, 00:24:24.507 "state": "online", 00:24:24.507 "raid_level": "raid1", 00:24:24.507 "superblock": true, 00:24:24.507 "num_base_bdevs": 4, 00:24:24.507 "num_base_bdevs_discovered": 4, 00:24:24.507 "num_base_bdevs_operational": 4, 00:24:24.507 "base_bdevs_list": [ 00:24:24.507 { 00:24:24.507 "name": "BaseBdev1", 00:24:24.507 "uuid": "ed41ebb9-5f35-5530-a685-7f6ad47e6f41", 00:24:24.507 "is_configured": true, 00:24:24.507 "data_offset": 2048, 00:24:24.507 "data_size": 63488 00:24:24.507 }, 00:24:24.507 { 00:24:24.507 "name": "BaseBdev2", 00:24:24.507 "uuid": "4376e39a-34e8-50c7-b2cc-a0a3cc67b9ff", 00:24:24.507 "is_configured": true, 00:24:24.507 "data_offset": 2048, 00:24:24.507 "data_size": 63488 00:24:24.507 }, 00:24:24.507 { 00:24:24.507 "name": "BaseBdev3", 00:24:24.507 "uuid": "37fe0b2a-1548-580c-94f9-e4738c83fbbf", 00:24:24.507 "is_configured": true, 00:24:24.507 "data_offset": 2048, 00:24:24.507 "data_size": 63488 00:24:24.507 }, 00:24:24.507 { 00:24:24.507 "name": "BaseBdev4", 00:24:24.507 "uuid": "6fba30da-f68f-5046-b981-eff1ea15ff68", 00:24:24.507 "is_configured": true, 00:24:24.507 "data_offset": 2048, 00:24:24.507 "data_size": 63488 00:24:24.507 } 00:24:24.507 ] 00:24:24.507 }' 00:24:24.507 10:34:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.507 10:34:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.443 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:25.443 10:34:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:24:25.443 [2024-07-26 10:34:38.245985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2860b70 00:24:26.407 10:34:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:26.666 [2024-07-26 10:34:39.370055] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:24:26.666 [2024-07-26 10:34:39.370106] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:26.666 [2024-07-26 10:34:39.370311] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2860b70 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.666 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.925 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.925 "name": "raid_bdev1", 00:24:26.925 "uuid": "5cc9be22-67e2-4f7a-a5ee-544d94e5654f", 00:24:26.925 "strip_size_kb": 0, 00:24:26.925 "state": "online", 00:24:26.925 "raid_level": "raid1", 00:24:26.925 "superblock": true, 00:24:26.925 "num_base_bdevs": 4, 00:24:26.925 "num_base_bdevs_discovered": 3, 00:24:26.925 "num_base_bdevs_operational": 3, 00:24:26.925 "base_bdevs_list": [ 00:24:26.925 { 00:24:26.925 "name": null, 00:24:26.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.925 "is_configured": false, 00:24:26.925 "data_offset": 2048, 00:24:26.925 "data_size": 63488 00:24:26.925 }, 00:24:26.925 { 00:24:26.925 "name": "BaseBdev2", 00:24:26.925 "uuid": "4376e39a-34e8-50c7-b2cc-a0a3cc67b9ff", 00:24:26.925 "is_configured": true, 00:24:26.925 "data_offset": 2048, 00:24:26.925 "data_size": 63488 00:24:26.925 }, 00:24:26.925 { 00:24:26.925 "name": "BaseBdev3", 00:24:26.925 "uuid": "37fe0b2a-1548-580c-94f9-e4738c83fbbf", 
00:24:26.925 "is_configured": true, 00:24:26.925 "data_offset": 2048, 00:24:26.925 "data_size": 63488 00:24:26.925 }, 00:24:26.925 { 00:24:26.925 "name": "BaseBdev4", 00:24:26.925 "uuid": "6fba30da-f68f-5046-b981-eff1ea15ff68", 00:24:26.925 "is_configured": true, 00:24:26.925 "data_offset": 2048, 00:24:26.925 "data_size": 63488 00:24:26.925 } 00:24:26.925 ] 00:24:26.925 }' 00:24:26.925 10:34:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.925 10:34:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.492 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:27.750 [2024-07-26 10:34:40.395566] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:27.750 [2024-07-26 10:34:40.395598] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:27.750 [2024-07-26 10:34:40.398638] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.750 [2024-07-26 10:34:40.398669] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.750 [2024-07-26 10:34:40.398755] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:27.750 [2024-07-26 10:34:40.398772] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x285d080 name raid_bdev1, state offline 00:24:27.750 0 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3471425 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3471425 ']' 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3471425 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471425 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471425' 00:24:27.750 killing process with pid 3471425 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3471425 00:24:27.750 [2024-07-26 10:34:40.471311] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:27.750 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3471425 00:24:27.750 [2024-07-26 10:34:40.498023] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.5ki1Hx4FEO 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:24:28.009 00:24:28.009 real 0m7.765s 00:24:28.009 user 0m12.403s 00:24:28.009 sys 0m1.408s 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.009 10:34:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:28.009 ************************************ 00:24:28.009 END TEST raid_write_error_test 00:24:28.009 ************************************ 00:24:28.009 10:34:40 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:24:28.009 10:34:40 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:24:28.009 10:34:40 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:24:28.009 10:34:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:28.009 10:34:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.009 10:34:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.009 ************************************ 00:24:28.009 START TEST raid_rebuild_test 00:24:28.009 ************************************ 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:28.009 10:34:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=3472795 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 3472795 /var/tmp/spdk-raid.sock 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 3472795 ']' 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:28.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:28.009 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.010 10:34:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:28.010 [2024-07-26 10:34:40.846680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:24:28.010 [2024-07-26 10:34:40.846739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472795 ] 00:24:28.010 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:28.010 Zero copy mechanism will not be used. 
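For reference, a minimal sketch of the raid1 assembly that raid_rebuild_test drives over the RPC socket before bdevperf starts issuing I/O. It assumes an SPDK application (here bdevperf) is already listening on /var/tmp/spdk-raid.sock, and it reuses only the RPC calls and jq filter that appear in the trace below; it is an illustration of the flow, not the full test helper:

    # hypothetical standalone reproduction of the setup steps traced below
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # back each base bdev with a 32 MiB malloc bdev (512-byte blocks) wrapped in a passthru bdev
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s $sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc -s $sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    # assemble the two base bdevs into a raid1 bdev named raid_bdev1
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    # verify_raid_bdev_state-style check: read the raid bdev's state from bdev_raid_get_bdevs
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

The last command is expected to print "online" once both base bdevs are discovered, matching the state verified in the log entries that follow.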
00:24:28.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.268 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:28.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.268 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:28.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.268 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:28.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.268 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:28.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.268 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:28.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:28.269 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:28.269 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:28.269 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:28.269 [2024-07-26 10:34:40.981813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.269 [2024-07-26 10:34:41.025580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.269 [2024-07-26 10:34:41.086046] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:28.269 [2024-07-26 10:34:41.086083] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.204 10:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.204 10:34:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:24:29.204 10:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:29.204 10:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:29.204 BaseBdev1_malloc 00:24:29.204 10:34:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:29.462 [2024-07-26 10:34:42.184443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:29.462 [2024-07-26 10:34:42.184487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.462 [2024-07-26 10:34:42.184507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20aa370 00:24:29.462 [2024-07-26 10:34:42.184519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.462 [2024-07-26 10:34:42.185938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.462 [2024-07-26 10:34:42.185969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:29.462 BaseBdev1 00:24:29.462 10:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:29.462 10:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:29.721 BaseBdev2_malloc 00:24:29.721 10:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:29.979 
[2024-07-26 10:34:42.646047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:29.979 [2024-07-26 10:34:42.646090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.979 [2024-07-26 10:34:42.646108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20660d0 00:24:29.979 [2024-07-26 10:34:42.646120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.979 [2024-07-26 10:34:42.647511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.979 [2024-07-26 10:34:42.647538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:29.979 BaseBdev2 00:24:29.979 10:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:30.237 spare_malloc 00:24:30.237 10:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:30.237 spare_delay 00:24:30.237 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:30.494 [2024-07-26 10:34:43.332085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:30.494 [2024-07-26 10:34:43.332121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.494 [2024-07-26 10:34:43.332137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2054070 00:24:30.494 [2024-07-26 10:34:43.332156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.494 [2024-07-26 10:34:43.333409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.494 [2024-07-26 10:34:43.333434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:30.494 spare 00:24:30.494 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:30.751 [2024-07-26 10:34:43.552675] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.751 [2024-07-26 10:34:43.553715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.751 [2024-07-26 10:34:43.553780] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2055250 00:24:30.751 [2024-07-26 10:34:43.553790] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:30.751 [2024-07-26 10:34:43.553957] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f0f4f0 00:24:30.751 [2024-07-26 10:34:43.554068] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2055250 00:24:30.751 [2024-07-26 10:34:43.554077] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2055250 00:24:30.751 [2024-07-26 10:34:43.554175] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.751 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.752 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.752 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.752 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.010 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.010 "name": "raid_bdev1", 00:24:31.010 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:31.010 "strip_size_kb": 0, 00:24:31.010 "state": "online", 00:24:31.010 "raid_level": "raid1", 00:24:31.010 "superblock": false, 00:24:31.010 "num_base_bdevs": 2, 00:24:31.010 "num_base_bdevs_discovered": 2, 00:24:31.010 "num_base_bdevs_operational": 2, 00:24:31.010 "base_bdevs_list": [ 00:24:31.010 { 00:24:31.010 "name": "BaseBdev1", 00:24:31.010 "uuid": "09a228f8-dded-52ec-b93c-750f2acd3d4e", 00:24:31.010 "is_configured": true, 00:24:31.010 "data_offset": 0, 00:24:31.010 "data_size": 65536 00:24:31.010 }, 00:24:31.010 { 00:24:31.010 "name": "BaseBdev2", 00:24:31.010 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:31.010 "is_configured": true, 00:24:31.010 "data_offset": 0, 00:24:31.010 "data_size": 65536 00:24:31.010 } 00:24:31.010 ] 00:24:31.010 }' 00:24:31.010 10:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.010 10:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.576 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:31.576 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:31.834 [2024-07-26 10:34:44.523496] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.834 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:24:31.834 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.834 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:32.093 10:34:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:32.093 [2024-07-26 10:34:44.980516] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2051900 00:24:32.093 /dev/nbd0 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:32.350 1+0 records in 00:24:32.350 1+0 records out 00:24:32.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261704 s, 15.7 MB/s 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:32.350 10:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:36.532 65536+0 records in 00:24:36.532 65536+0 records out 00:24:36.532 33554432 bytes (34 MB, 32 MiB) copied, 4.21302 s, 8.0 MB/s 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:36.532 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:36.791 [2024-07-26 10:34:49.510765] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:36.791 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:37.049 [2024-07-26 10:34:49.735466] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.049 
10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.049 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.308 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.308 "name": "raid_bdev1", 00:24:37.308 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:37.308 "strip_size_kb": 0, 00:24:37.308 "state": "online", 00:24:37.308 "raid_level": "raid1", 00:24:37.308 "superblock": false, 00:24:37.308 "num_base_bdevs": 2, 00:24:37.308 "num_base_bdevs_discovered": 1, 00:24:37.308 "num_base_bdevs_operational": 1, 00:24:37.308 "base_bdevs_list": [ 00:24:37.308 { 00:24:37.308 "name": null, 00:24:37.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.308 "is_configured": false, 00:24:37.308 "data_offset": 0, 00:24:37.308 "data_size": 65536 00:24:37.308 }, 00:24:37.308 { 00:24:37.308 "name": "BaseBdev2", 00:24:37.308 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:37.308 "is_configured": true, 00:24:37.308 "data_offset": 0, 00:24:37.308 "data_size": 65536 00:24:37.308 } 00:24:37.308 ] 00:24:37.308 }' 00:24:37.308 10:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.308 10:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.874 10:34:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:38.133 [2024-07-26 10:34:50.778247] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.133 [2024-07-26 10:34:50.782893] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2051900 00:24:38.133 [2024-07-26 10:34:50.784889] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:38.133 10:34:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.068 10:34:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.327 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:39.327 "name": "raid_bdev1", 00:24:39.327 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:39.327 "strip_size_kb": 0, 00:24:39.327 "state": "online", 00:24:39.327 "raid_level": "raid1", 00:24:39.327 "superblock": false, 00:24:39.327 "num_base_bdevs": 2, 00:24:39.327 "num_base_bdevs_discovered": 2, 00:24:39.327 "num_base_bdevs_operational": 2, 
00:24:39.327 "process": { 00:24:39.327 "type": "rebuild", 00:24:39.327 "target": "spare", 00:24:39.327 "progress": { 00:24:39.327 "blocks": 24576, 00:24:39.327 "percent": 37 00:24:39.327 } 00:24:39.327 }, 00:24:39.327 "base_bdevs_list": [ 00:24:39.327 { 00:24:39.327 "name": "spare", 00:24:39.327 "uuid": "52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:39.327 "is_configured": true, 00:24:39.327 "data_offset": 0, 00:24:39.327 "data_size": 65536 00:24:39.327 }, 00:24:39.327 { 00:24:39.327 "name": "BaseBdev2", 00:24:39.327 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:39.327 "is_configured": true, 00:24:39.327 "data_offset": 0, 00:24:39.327 "data_size": 65536 00:24:39.327 } 00:24:39.327 ] 00:24:39.327 }' 00:24:39.327 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:39.327 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.327 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:39.327 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.327 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:39.586 [2024-07-26 10:34:52.326893] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:39.586 [2024-07-26 10:34:52.396780] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:39.586 [2024-07-26 10:34:52.396823] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.586 [2024-07-26 10:34:52.396837] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:39.586 [2024-07-26 10:34:52.396844] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.586 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.857 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.857 "name": "raid_bdev1", 00:24:39.857 "uuid": 
"ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:39.857 "strip_size_kb": 0, 00:24:39.857 "state": "online", 00:24:39.857 "raid_level": "raid1", 00:24:39.857 "superblock": false, 00:24:39.857 "num_base_bdevs": 2, 00:24:39.857 "num_base_bdevs_discovered": 1, 00:24:39.857 "num_base_bdevs_operational": 1, 00:24:39.857 "base_bdevs_list": [ 00:24:39.857 { 00:24:39.857 "name": null, 00:24:39.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.857 "is_configured": false, 00:24:39.857 "data_offset": 0, 00:24:39.857 "data_size": 65536 00:24:39.857 }, 00:24:39.857 { 00:24:39.857 "name": "BaseBdev2", 00:24:39.857 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:39.857 "is_configured": true, 00:24:39.857 "data_offset": 0, 00:24:39.857 "data_size": 65536 00:24:39.857 } 00:24:39.857 ] 00:24:39.857 }' 00:24:39.857 10:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.857 10:34:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.441 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.705 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:40.705 "name": "raid_bdev1", 00:24:40.705 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:40.705 "strip_size_kb": 0, 00:24:40.705 "state": "online", 00:24:40.705 "raid_level": "raid1", 00:24:40.705 "superblock": false, 00:24:40.705 "num_base_bdevs": 2, 00:24:40.705 "num_base_bdevs_discovered": 1, 00:24:40.705 "num_base_bdevs_operational": 1, 00:24:40.705 "base_bdevs_list": [ 00:24:40.705 { 00:24:40.705 "name": null, 00:24:40.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.705 "is_configured": false, 00:24:40.705 "data_offset": 0, 00:24:40.705 "data_size": 65536 00:24:40.705 }, 00:24:40.705 { 00:24:40.705 "name": "BaseBdev2", 00:24:40.705 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:40.705 "is_configured": true, 00:24:40.705 "data_offset": 0, 00:24:40.705 "data_size": 65536 00:24:40.705 } 00:24:40.705 ] 00:24:40.705 }' 00:24:40.705 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:40.705 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:40.705 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:40.705 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:40.705 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.962 [2024-07-26 10:34:53.696318] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:24:40.962 [2024-07-26 10:34:53.701036] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2051900 00:24:40.962 [2024-07-26 10:34:53.702391] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:40.962 10:34:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.895 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.154 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:42.154 "name": "raid_bdev1", 00:24:42.154 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:42.154 "strip_size_kb": 0, 00:24:42.154 "state": "online", 00:24:42.154 "raid_level": "raid1", 00:24:42.154 "superblock": false, 00:24:42.154 "num_base_bdevs": 2, 00:24:42.154 "num_base_bdevs_discovered": 2, 00:24:42.154 "num_base_bdevs_operational": 2, 00:24:42.154 "process": { 00:24:42.154 "type": "rebuild", 00:24:42.154 "target": "spare", 00:24:42.154 "progress": { 00:24:42.154 "blocks": 24576, 00:24:42.154 "percent": 37 00:24:42.154 } 00:24:42.154 }, 00:24:42.154 "base_bdevs_list": [ 00:24:42.154 { 00:24:42.154 "name": "spare", 00:24:42.154 "uuid": "52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:42.154 "is_configured": true, 00:24:42.154 "data_offset": 0, 00:24:42.154 "data_size": 65536 00:24:42.154 }, 00:24:42.154 { 00:24:42.154 "name": "BaseBdev2", 00:24:42.154 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:42.154 "is_configured": true, 00:24:42.154 "data_offset": 0, 00:24:42.154 "data_size": 65536 00:24:42.154 } 00:24:42.154 ] 00:24:42.154 }' 00:24:42.154 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:42.154 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.154 10:34:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=739 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.154 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.413 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:42.413 "name": "raid_bdev1", 00:24:42.413 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:42.413 "strip_size_kb": 0, 00:24:42.413 "state": "online", 00:24:42.413 "raid_level": "raid1", 00:24:42.413 "superblock": false, 00:24:42.413 "num_base_bdevs": 2, 00:24:42.413 "num_base_bdevs_discovered": 2, 00:24:42.413 "num_base_bdevs_operational": 2, 00:24:42.413 "process": { 00:24:42.413 "type": "rebuild", 00:24:42.413 "target": "spare", 00:24:42.413 "progress": { 00:24:42.413 "blocks": 30720, 00:24:42.413 "percent": 46 00:24:42.413 } 00:24:42.413 }, 00:24:42.413 "base_bdevs_list": [ 00:24:42.413 { 00:24:42.413 "name": "spare", 00:24:42.413 "uuid": "52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:42.413 "is_configured": true, 00:24:42.413 "data_offset": 0, 00:24:42.413 "data_size": 65536 00:24:42.413 }, 00:24:42.413 { 00:24:42.413 "name": "BaseBdev2", 00:24:42.413 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:42.413 "is_configured": true, 00:24:42.413 "data_offset": 0, 00:24:42.413 "data_size": 65536 00:24:42.413 } 00:24:42.413 ] 00:24:42.413 }' 00:24:42.413 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:42.413 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.413 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:42.672 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.672 10:34:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.607 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.866 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:24:43.866 "name": "raid_bdev1", 00:24:43.866 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:43.866 "strip_size_kb": 0, 00:24:43.866 "state": "online", 00:24:43.866 "raid_level": "raid1", 00:24:43.866 "superblock": false, 00:24:43.866 "num_base_bdevs": 2, 00:24:43.866 "num_base_bdevs_discovered": 2, 00:24:43.866 "num_base_bdevs_operational": 2, 00:24:43.866 "process": { 00:24:43.866 "type": "rebuild", 00:24:43.866 "target": "spare", 00:24:43.866 "progress": { 00:24:43.866 "blocks": 57344, 00:24:43.866 "percent": 87 00:24:43.866 } 00:24:43.866 }, 00:24:43.866 "base_bdevs_list": [ 00:24:43.866 { 00:24:43.866 "name": "spare", 00:24:43.866 "uuid": "52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:43.866 "is_configured": true, 00:24:43.866 "data_offset": 0, 00:24:43.866 "data_size": 65536 00:24:43.866 }, 00:24:43.866 { 00:24:43.866 "name": "BaseBdev2", 00:24:43.866 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:43.866 "is_configured": true, 00:24:43.866 "data_offset": 0, 00:24:43.866 "data_size": 65536 00:24:43.866 } 00:24:43.866 ] 00:24:43.866 }' 00:24:43.866 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:43.866 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.866 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.866 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.866 10:34:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:44.124 [2024-07-26 10:34:56.925631] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:44.124 [2024-07-26 10:34:56.925686] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:44.124 [2024-07-26 10:34:56.925721] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.059 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:45.059 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.059 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.060 "name": "raid_bdev1", 00:24:45.060 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:45.060 "strip_size_kb": 0, 00:24:45.060 "state": "online", 00:24:45.060 "raid_level": "raid1", 00:24:45.060 "superblock": false, 00:24:45.060 "num_base_bdevs": 2, 00:24:45.060 "num_base_bdevs_discovered": 2, 00:24:45.060 "num_base_bdevs_operational": 2, 00:24:45.060 "base_bdevs_list": [ 00:24:45.060 { 00:24:45.060 "name": "spare", 00:24:45.060 "uuid": 
"52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:45.060 "is_configured": true, 00:24:45.060 "data_offset": 0, 00:24:45.060 "data_size": 65536 00:24:45.060 }, 00:24:45.060 { 00:24:45.060 "name": "BaseBdev2", 00:24:45.060 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:45.060 "is_configured": true, 00:24:45.060 "data_offset": 0, 00:24:45.060 "data_size": 65536 00:24:45.060 } 00:24:45.060 ] 00:24:45.060 }' 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:45.060 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.319 10:34:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.319 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.319 "name": "raid_bdev1", 00:24:45.319 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:45.319 "strip_size_kb": 0, 00:24:45.319 "state": "online", 00:24:45.319 "raid_level": "raid1", 00:24:45.319 "superblock": false, 00:24:45.319 "num_base_bdevs": 2, 00:24:45.319 "num_base_bdevs_discovered": 2, 00:24:45.319 "num_base_bdevs_operational": 2, 00:24:45.319 "base_bdevs_list": [ 00:24:45.319 { 00:24:45.319 "name": "spare", 00:24:45.319 "uuid": "52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:45.319 "is_configured": true, 00:24:45.319 "data_offset": 0, 00:24:45.319 "data_size": 65536 00:24:45.319 }, 00:24:45.319 { 00:24:45.319 "name": "BaseBdev2", 00:24:45.319 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:45.319 "is_configured": true, 00:24:45.319 "data_offset": 0, 00:24:45.319 "data_size": 65536 00:24:45.319 } 00:24:45.319 ] 00:24:45.319 }' 00:24:45.319 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:45.578 
10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.578 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.837 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.837 "name": "raid_bdev1", 00:24:45.837 "uuid": "ec669d0a-300a-4ca9-9d02-50218bea37e8", 00:24:45.837 "strip_size_kb": 0, 00:24:45.837 "state": "online", 00:24:45.837 "raid_level": "raid1", 00:24:45.837 "superblock": false, 00:24:45.837 "num_base_bdevs": 2, 00:24:45.837 "num_base_bdevs_discovered": 2, 00:24:45.837 "num_base_bdevs_operational": 2, 00:24:45.837 "base_bdevs_list": [ 00:24:45.837 { 00:24:45.837 "name": "spare", 00:24:45.837 "uuid": "52f71815-5752-57b9-9f76-f12081d7cc80", 00:24:45.837 "is_configured": true, 00:24:45.837 "data_offset": 0, 00:24:45.837 "data_size": 65536 00:24:45.837 }, 00:24:45.837 { 00:24:45.837 "name": "BaseBdev2", 00:24:45.837 "uuid": "bbe6d03d-2912-5ecc-b807-a3ff6698b5a9", 00:24:45.837 "is_configured": true, 00:24:45.837 "data_offset": 0, 00:24:45.837 "data_size": 65536 00:24:45.837 } 00:24:45.837 ] 00:24:45.837 }' 00:24:45.837 10:34:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.837 10:34:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.401 10:34:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:46.660 [2024-07-26 10:34:59.308296] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.660 [2024-07-26 10:34:59.308322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.660 [2024-07-26 10:34:59.308379] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.660 [2024-07-26 10:34:59.308431] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.660 [2024-07-26 10:34:59.308441] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2055250 name raid_bdev1, state offline 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:46.660 10:34:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:46.660 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:46.919 /dev/nbd0 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:46.919 1+0 records in 00:24:46.919 1+0 records out 00:24:46.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236286 s, 17.3 MB/s 00:24:46.919 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:47.178 10:34:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:47.178 /dev/nbd1 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:47.178 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.437 1+0 records in 00:24:47.437 1+0 records out 00:24:47.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304206 s, 13.5 MB/s 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.437 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.695 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 3472795 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 3472795 ']' 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 3472795 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3472795 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3472795' 00:24:47.954 killing process with pid 3472795 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 3472795 00:24:47.954 Received shutdown signal, test time was about 60.000000 seconds 00:24:47.954 00:24:47.954 Latency(us) 00:24:47.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.954 =================================================================================================================== 00:24:47.954 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:47.954 [2024-07-26 10:35:00.744571] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:47.954 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 3472795 00:24:47.954 [2024-07-26 10:35:00.768462] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.213 10:35:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:24:48.213 00:24:48.213 real 0m20.167s 00:24:48.213 user 0m27.726s 00:24:48.213 sys 0m4.102s 00:24:48.213 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.213 10:35:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.213 ************************************ 00:24:48.213 END TEST raid_rebuild_test 00:24:48.213 ************************************ 00:24:48.213 10:35:00 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:24:48.213 10:35:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:48.213 10:35:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.213 10:35:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:48.213 ************************************ 00:24:48.213 START TEST raid_rebuild_test_sb 00:24:48.213 ************************************ 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:24:48.213 
10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=3476319 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 3476319 /var/tmp/spdk-raid.sock 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:48.213 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3476319 ']' 00:24:48.214 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:48.214 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.214 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:48.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:48.214 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.214 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.214 [2024-07-26 10:35:01.101873] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:24:48.214 [2024-07-26 10:35:01.101932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476319 ] 00:24:48.214 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:48.214 Zero copy mechanism will not be used. 
00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:48.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.472 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:48.473 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:48.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:48.473 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:48.473 [2024-07-26 10:35:01.236790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.473 [2024-07-26 10:35:01.280324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.473 [2024-07-26 10:35:01.338520] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.473 [2024-07-26 10:35:01.338551] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.408 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.408 10:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:49.408 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:49.408 10:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:49.408 BaseBdev1_malloc 00:24:49.408 10:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:49.666 [2024-07-26 10:35:02.432099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:49.666 [2024-07-26 10:35:02.432149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.666 [2024-07-26 10:35:02.432168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f9370 00:24:49.666 [2024-07-26 10:35:02.432179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.666 [2024-07-26 10:35:02.433524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.666 [2024-07-26 10:35:02.433550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:49.666 BaseBdev1 00:24:49.666 10:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:49.666 10:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:49.925 BaseBdev2_malloc 00:24:49.925 10:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 
00:24:50.183 [2024-07-26 10:35:02.885515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:50.183 [2024-07-26 10:35:02.885553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.183 [2024-07-26 10:35:02.885571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26b50d0 00:24:50.183 [2024-07-26 10:35:02.885582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.183 [2024-07-26 10:35:02.886953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.183 [2024-07-26 10:35:02.886978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:50.183 BaseBdev2 00:24:50.183 10:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:50.442 spare_malloc 00:24:50.442 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:50.700 spare_delay 00:24:50.700 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:50.700 [2024-07-26 10:35:03.571528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:50.700 [2024-07-26 10:35:03.571565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.700 [2024-07-26 10:35:03.571582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26a3070 00:24:50.700 [2024-07-26 10:35:03.571593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.700 [2024-07-26 10:35:03.572835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.700 [2024-07-26 10:35:03.572860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:50.700 spare 00:24:50.700 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:50.957 [2024-07-26 10:35:03.800250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:50.957 [2024-07-26 10:35:03.801418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.957 [2024-07-26 10:35:03.801547] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x26a4250 00:24:50.957 [2024-07-26 10:35:03.801559] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:50.957 [2024-07-26 10:35:03.801732] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x255e4f0 00:24:50.957 [2024-07-26 10:35:03.801852] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26a4250 00:24:50.957 [2024-07-26 10:35:03.801861] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26a4250 00:24:50.957 [2024-07-26 10:35:03.801954] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.957 10:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.214 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.215 "name": "raid_bdev1", 00:24:51.215 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:24:51.215 "strip_size_kb": 0, 00:24:51.215 "state": "online", 00:24:51.215 "raid_level": "raid1", 00:24:51.215 "superblock": true, 00:24:51.215 "num_base_bdevs": 2, 00:24:51.215 "num_base_bdevs_discovered": 2, 00:24:51.215 "num_base_bdevs_operational": 2, 00:24:51.215 "base_bdevs_list": [ 00:24:51.215 { 00:24:51.215 "name": "BaseBdev1", 00:24:51.215 "uuid": "ffee4764-9c71-592c-b487-9b052016d963", 00:24:51.215 "is_configured": true, 00:24:51.215 "data_offset": 2048, 00:24:51.215 "data_size": 63488 00:24:51.215 }, 00:24:51.215 { 00:24:51.215 "name": "BaseBdev2", 00:24:51.215 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:24:51.215 "is_configured": true, 00:24:51.215 "data_offset": 2048, 00:24:51.215 "data_size": 63488 00:24:51.215 } 00:24:51.215 ] 00:24:51.215 }' 00:24:51.215 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.215 10:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:51.781 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:51.781 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:52.040 [2024-07-26 10:35:04.835194] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.040 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:24:52.040 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.040 10:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.302 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:52.561 [2024-07-26 10:35:05.304239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a0900 00:24:52.561 /dev/nbd0 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.561 1+0 records in 00:24:52.561 1+0 records out 00:24:52.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259118 s, 15.8 MB/s 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # return 0 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:52.561 10:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:57.825 63488+0 records in 00:24:57.825 63488+0 records out 00:24:57.825 32505856 bytes (33 MB, 31 MiB) copied, 4.74958 s, 6.8 MB/s 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:57.825 [2024-07-26 10:35:10.358960] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:57.825 [2024-07-26 10:35:10.579582] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.825 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.083 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.083 "name": "raid_bdev1", 00:24:58.083 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:24:58.083 "strip_size_kb": 0, 00:24:58.083 "state": "online", 00:24:58.083 "raid_level": "raid1", 00:24:58.083 "superblock": true, 00:24:58.083 "num_base_bdevs": 2, 00:24:58.083 "num_base_bdevs_discovered": 1, 00:24:58.083 "num_base_bdevs_operational": 1, 00:24:58.083 "base_bdevs_list": [ 00:24:58.083 { 00:24:58.083 "name": null, 00:24:58.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.083 "is_configured": false, 00:24:58.083 "data_offset": 2048, 00:24:58.083 "data_size": 63488 00:24:58.083 }, 00:24:58.083 { 00:24:58.083 "name": "BaseBdev2", 00:24:58.083 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:24:58.083 "is_configured": true, 00:24:58.083 "data_offset": 2048, 00:24:58.083 "data_size": 63488 00:24:58.083 } 00:24:58.083 ] 00:24:58.083 }' 00:24:58.083 10:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.083 10:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:58.649 10:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:58.907 [2024-07-26 10:35:11.598288] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:58.907 [2024-07-26 10:35:11.603020] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a0900 00:24:58.907 [2024-07-26 10:35:11.605020] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:58.907 10:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.841 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.099 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.099 "name": 
"raid_bdev1", 00:25:00.099 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:00.099 "strip_size_kb": 0, 00:25:00.099 "state": "online", 00:25:00.099 "raid_level": "raid1", 00:25:00.099 "superblock": true, 00:25:00.099 "num_base_bdevs": 2, 00:25:00.099 "num_base_bdevs_discovered": 2, 00:25:00.099 "num_base_bdevs_operational": 2, 00:25:00.099 "process": { 00:25:00.099 "type": "rebuild", 00:25:00.099 "target": "spare", 00:25:00.099 "progress": { 00:25:00.099 "blocks": 24576, 00:25:00.099 "percent": 38 00:25:00.099 } 00:25:00.099 }, 00:25:00.099 "base_bdevs_list": [ 00:25:00.099 { 00:25:00.099 "name": "spare", 00:25:00.099 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:00.099 "is_configured": true, 00:25:00.099 "data_offset": 2048, 00:25:00.099 "data_size": 63488 00:25:00.099 }, 00:25:00.099 { 00:25:00.099 "name": "BaseBdev2", 00:25:00.099 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:00.099 "is_configured": true, 00:25:00.099 "data_offset": 2048, 00:25:00.099 "data_size": 63488 00:25:00.099 } 00:25:00.099 ] 00:25:00.099 }' 00:25:00.099 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:00.099 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.100 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.100 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.100 10:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:00.358 [2024-07-26 10:35:13.155344] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:00.358 [2024-07-26 10:35:13.216786] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:00.358 [2024-07-26 10:35:13.216835] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.358 [2024-07-26 10:35:13.216850] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:00.358 [2024-07-26 10:35:13.216857] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.358 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.614 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.614 "name": "raid_bdev1", 00:25:00.614 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:00.614 "strip_size_kb": 0, 00:25:00.614 "state": "online", 00:25:00.614 "raid_level": "raid1", 00:25:00.614 "superblock": true, 00:25:00.614 "num_base_bdevs": 2, 00:25:00.614 "num_base_bdevs_discovered": 1, 00:25:00.614 "num_base_bdevs_operational": 1, 00:25:00.614 "base_bdevs_list": [ 00:25:00.614 { 00:25:00.614 "name": null, 00:25:00.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.614 "is_configured": false, 00:25:00.614 "data_offset": 2048, 00:25:00.614 "data_size": 63488 00:25:00.614 }, 00:25:00.614 { 00:25:00.614 "name": "BaseBdev2", 00:25:00.614 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:00.614 "is_configured": true, 00:25:00.614 "data_offset": 2048, 00:25:00.614 "data_size": 63488 00:25:00.614 } 00:25:00.614 ] 00:25:00.614 }' 00:25:00.614 10:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.614 10:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.177 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.434 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:01.434 "name": "raid_bdev1", 00:25:01.434 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:01.434 "strip_size_kb": 0, 00:25:01.434 "state": "online", 00:25:01.434 "raid_level": "raid1", 00:25:01.434 "superblock": true, 00:25:01.434 "num_base_bdevs": 2, 00:25:01.434 "num_base_bdevs_discovered": 1, 00:25:01.434 "num_base_bdevs_operational": 1, 00:25:01.434 "base_bdevs_list": [ 00:25:01.434 { 00:25:01.434 "name": null, 00:25:01.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.434 "is_configured": false, 00:25:01.434 "data_offset": 2048, 00:25:01.434 "data_size": 63488 00:25:01.434 }, 00:25:01.434 { 00:25:01.434 "name": "BaseBdev2", 00:25:01.434 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:01.434 "is_configured": true, 00:25:01.434 "data_offset": 2048, 00:25:01.434 "data_size": 63488 00:25:01.434 } 00:25:01.434 ] 00:25:01.434 }' 00:25:01.434 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:01.434 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:01.434 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:25:01.691 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:01.691 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:01.691 [2024-07-26 10:35:14.564556] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:01.691 [2024-07-26 10:35:14.569282] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a0900 00:25:01.691 [2024-07-26 10:35:14.570634] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:01.691 10:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.059 "name": "raid_bdev1", 00:25:03.059 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:03.059 "strip_size_kb": 0, 00:25:03.059 "state": "online", 00:25:03.059 "raid_level": "raid1", 00:25:03.059 "superblock": true, 00:25:03.059 "num_base_bdevs": 2, 00:25:03.059 "num_base_bdevs_discovered": 2, 00:25:03.059 "num_base_bdevs_operational": 2, 00:25:03.059 "process": { 00:25:03.059 "type": "rebuild", 00:25:03.059 "target": "spare", 00:25:03.059 "progress": { 00:25:03.059 "blocks": 24576, 00:25:03.059 "percent": 38 00:25:03.059 } 00:25:03.059 }, 00:25:03.059 "base_bdevs_list": [ 00:25:03.059 { 00:25:03.059 "name": "spare", 00:25:03.059 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:03.059 "is_configured": true, 00:25:03.059 "data_offset": 2048, 00:25:03.059 "data_size": 63488 00:25:03.059 }, 00:25:03.059 { 00:25:03.059 "name": "BaseBdev2", 00:25:03.059 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:03.059 "is_configured": true, 00:25:03.059 "data_offset": 2048, 00:25:03.059 "data_size": 63488 00:25:03.059 } 00:25:03.059 ] 00:25:03.059 }' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 
00:25:03.059 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=759 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.059 10:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.316 10:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.316 "name": "raid_bdev1", 00:25:03.316 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:03.316 "strip_size_kb": 0, 00:25:03.316 "state": "online", 00:25:03.316 "raid_level": "raid1", 00:25:03.316 "superblock": true, 00:25:03.316 "num_base_bdevs": 2, 00:25:03.316 "num_base_bdevs_discovered": 2, 00:25:03.316 "num_base_bdevs_operational": 2, 00:25:03.316 "process": { 00:25:03.316 "type": "rebuild", 00:25:03.316 "target": "spare", 00:25:03.316 "progress": { 00:25:03.316 "blocks": 30720, 00:25:03.316 "percent": 48 00:25:03.316 } 00:25:03.316 }, 00:25:03.316 "base_bdevs_list": [ 00:25:03.316 { 00:25:03.316 "name": "spare", 00:25:03.316 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:03.316 "is_configured": true, 00:25:03.316 "data_offset": 2048, 00:25:03.316 "data_size": 63488 00:25:03.316 }, 00:25:03.316 { 00:25:03.316 "name": "BaseBdev2", 00:25:03.316 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:03.316 "is_configured": true, 00:25:03.316 "data_offset": 2048, 00:25:03.316 "data_size": 63488 00:25:03.316 } 00:25:03.316 ] 00:25:03.316 }' 00:25:03.316 10:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.316 10:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.316 10:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.573 10:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.573 10:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:04.503 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:04.503 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:04.503 10:35:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:04.503 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:04.503 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:04.503 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:04.504 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.504 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.764 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.764 "name": "raid_bdev1", 00:25:04.764 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:04.764 "strip_size_kb": 0, 00:25:04.764 "state": "online", 00:25:04.764 "raid_level": "raid1", 00:25:04.764 "superblock": true, 00:25:04.764 "num_base_bdevs": 2, 00:25:04.764 "num_base_bdevs_discovered": 2, 00:25:04.764 "num_base_bdevs_operational": 2, 00:25:04.764 "process": { 00:25:04.764 "type": "rebuild", 00:25:04.764 "target": "spare", 00:25:04.764 "progress": { 00:25:04.764 "blocks": 57344, 00:25:04.764 "percent": 90 00:25:04.764 } 00:25:04.764 }, 00:25:04.764 "base_bdevs_list": [ 00:25:04.764 { 00:25:04.764 "name": "spare", 00:25:04.764 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:04.764 "is_configured": true, 00:25:04.764 "data_offset": 2048, 00:25:04.764 "data_size": 63488 00:25:04.764 }, 00:25:04.764 { 00:25:04.764 "name": "BaseBdev2", 00:25:04.764 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:04.764 "is_configured": true, 00:25:04.764 "data_offset": 2048, 00:25:04.764 "data_size": 63488 00:25:04.764 } 00:25:04.764 ] 00:25:04.764 }' 00:25:04.764 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:04.764 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:04.764 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:04.764 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:04.764 10:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:05.021 [2024-07-26 10:35:17.693189] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:05.021 [2024-07-26 10:35:17.693243] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:05.021 [2024-07-26 10:35:17.693320] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:05.952 10:35:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:05.952 "name": "raid_bdev1", 00:25:05.952 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:05.952 "strip_size_kb": 0, 00:25:05.952 "state": "online", 00:25:05.952 "raid_level": "raid1", 00:25:05.952 "superblock": true, 00:25:05.952 "num_base_bdevs": 2, 00:25:05.952 "num_base_bdevs_discovered": 2, 00:25:05.952 "num_base_bdevs_operational": 2, 00:25:05.952 "base_bdevs_list": [ 00:25:05.952 { 00:25:05.952 "name": "spare", 00:25:05.952 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:05.952 "is_configured": true, 00:25:05.952 "data_offset": 2048, 00:25:05.952 "data_size": 63488 00:25:05.952 }, 00:25:05.952 { 00:25:05.952 "name": "BaseBdev2", 00:25:05.952 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:05.952 "is_configured": true, 00:25:05.952 "data_offset": 2048, 00:25:05.952 "data_size": 63488 00:25:05.952 } 00:25:05.952 ] 00:25:05.952 }' 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:05.952 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.210 10:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:06.467 "name": "raid_bdev1", 00:25:06.467 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:06.467 "strip_size_kb": 0, 00:25:06.467 "state": "online", 00:25:06.467 "raid_level": "raid1", 00:25:06.467 "superblock": true, 00:25:06.467 "num_base_bdevs": 2, 00:25:06.467 "num_base_bdevs_discovered": 2, 00:25:06.467 "num_base_bdevs_operational": 2, 00:25:06.467 "base_bdevs_list": [ 00:25:06.467 { 00:25:06.467 "name": "spare", 00:25:06.467 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:06.467 "is_configured": true, 00:25:06.467 "data_offset": 2048, 00:25:06.467 "data_size": 63488 00:25:06.467 }, 00:25:06.467 { 00:25:06.467 "name": "BaseBdev2", 00:25:06.467 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:06.467 "is_configured": true, 
00:25:06.467 "data_offset": 2048, 00:25:06.467 "data_size": 63488 00:25:06.467 } 00:25:06.467 ] 00:25:06.467 }' 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.467 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.724 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.724 "name": "raid_bdev1", 00:25:06.724 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:06.724 "strip_size_kb": 0, 00:25:06.724 "state": "online", 00:25:06.724 "raid_level": "raid1", 00:25:06.724 "superblock": true, 00:25:06.724 "num_base_bdevs": 2, 00:25:06.724 "num_base_bdevs_discovered": 2, 00:25:06.724 "num_base_bdevs_operational": 2, 00:25:06.724 "base_bdevs_list": [ 00:25:06.724 { 00:25:06.724 "name": "spare", 00:25:06.724 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:06.724 "is_configured": true, 00:25:06.724 "data_offset": 2048, 00:25:06.724 "data_size": 63488 00:25:06.724 }, 00:25:06.724 { 00:25:06.724 "name": "BaseBdev2", 00:25:06.724 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:06.724 "is_configured": true, 00:25:06.724 "data_offset": 2048, 00:25:06.724 "data_size": 63488 00:25:06.724 } 00:25:06.724 ] 00:25:06.724 }' 00:25:06.724 10:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.724 10:35:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.288 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:07.548 [2024-07-26 10:35:20.200236] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:07.548 [2024-07-26 10:35:20.200263] bdev_raid.c:1886:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:25:07.548 [2024-07-26 10:35:20.200323] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:07.548 [2024-07-26 10:35:20.200377] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:07.548 [2024-07-26 10:35:20.200388] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26a4250 name raid_bdev1, state offline 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:07.549 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:07.810 /dev/nbd0 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:07.810 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
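The waitfornbd trace above (from common/autotest_common.sh) treats an NBD device as ready once it appears in /proc/partitions and a single 4 KiB direct-I/O read from it succeeds. A minimal standalone sketch of that readiness check, built from the commands visible in the trace — the function name, the merged retry loop, the sleep, and the scratch-file path are illustrative simplifications, not the suite's exact helper:

#!/usr/bin/env bash
# Sketch of the NBD readiness check seen in the trace: wait for the device to
# show up in /proc/partitions, then prove it is readable with one direct read.
wait_for_nbd() {
        local nbd_name=$1 scratch=$2 i size
        for ((i = 1; i <= 20; i++)); do
                grep -q -w "$nbd_name" /proc/partitions && break
                sleep 0.1
        done
        # One 4 KiB direct-I/O read; the trace then checks the copied size is non-zero.
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ]
}

wait_for_nbd nbd0 /tmp/nbdtest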
00:25:07.810 1+0 records in 00:25:07.810 1+0 records out 00:25:07.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231061 s, 17.7 MB/s 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:07.811 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:08.069 /dev/nbd1 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:08.069 1+0 records in 00:25:08.069 1+0 records out 00:25:08.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298963 s, 13.7 MB/s 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:08.069 10:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:08.328 10:35:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:08.328 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:08.328 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:08.328 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:08.328 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:08.328 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:08.328 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:08.586 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:08.844 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:09.102 10:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:09.102 [2024-07-26 10:35:21.985150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:09.102 [2024-07-26 10:35:21.985190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.102 [2024-07-26 
10:35:21.985208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26a1b10 00:25:09.102 [2024-07-26 10:35:21.985220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.102 [2024-07-26 10:35:21.986785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.102 [2024-07-26 10:35:21.986812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:09.102 [2024-07-26 10:35:21.986881] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:09.102 [2024-07-26 10:35:21.986905] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:09.102 [2024-07-26 10:35:21.986998] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:09.102 spare 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.360 [2024-07-26 10:35:22.087304] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x26a4d60 00:25:09.360 [2024-07-26 10:35:22.087316] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:09.360 [2024-07-26 10:35:22.087481] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a0900 00:25:09.360 [2024-07-26 10:35:22.087611] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26a4d60 00:25:09.360 [2024-07-26 10:35:22.087620] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26a4d60 00:25:09.360 [2024-07-26 10:35:22.087711] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.360 "name": "raid_bdev1", 00:25:09.360 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:09.360 "strip_size_kb": 0, 00:25:09.360 "state": "online", 00:25:09.360 "raid_level": "raid1", 00:25:09.360 "superblock": true, 00:25:09.360 "num_base_bdevs": 2, 00:25:09.360 "num_base_bdevs_discovered": 2, 00:25:09.360 "num_base_bdevs_operational": 2, 00:25:09.360 "base_bdevs_list": [ 00:25:09.360 { 
00:25:09.360 "name": "spare", 00:25:09.360 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:09.360 "is_configured": true, 00:25:09.360 "data_offset": 2048, 00:25:09.360 "data_size": 63488 00:25:09.360 }, 00:25:09.360 { 00:25:09.360 "name": "BaseBdev2", 00:25:09.360 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:09.360 "is_configured": true, 00:25:09.360 "data_offset": 2048, 00:25:09.360 "data_size": 63488 00:25:09.360 } 00:25:09.360 ] 00:25:09.360 }' 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.360 10:35:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.927 10:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.185 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:10.185 "name": "raid_bdev1", 00:25:10.185 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:10.185 "strip_size_kb": 0, 00:25:10.185 "state": "online", 00:25:10.185 "raid_level": "raid1", 00:25:10.185 "superblock": true, 00:25:10.185 "num_base_bdevs": 2, 00:25:10.185 "num_base_bdevs_discovered": 2, 00:25:10.185 "num_base_bdevs_operational": 2, 00:25:10.185 "base_bdevs_list": [ 00:25:10.185 { 00:25:10.185 "name": "spare", 00:25:10.185 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:10.185 "is_configured": true, 00:25:10.185 "data_offset": 2048, 00:25:10.185 "data_size": 63488 00:25:10.185 }, 00:25:10.185 { 00:25:10.185 "name": "BaseBdev2", 00:25:10.185 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:10.185 "is_configured": true, 00:25:10.185 "data_offset": 2048, 00:25:10.185 "data_size": 63488 00:25:10.185 } 00:25:10.185 ] 00:25:10.185 }' 00:25:10.185 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:10.185 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:10.185 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:10.443 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:10.443 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.443 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:10.700 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:10.701 [2024-07-26 10:35:23.569427] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.701 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.957 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.957 "name": "raid_bdev1", 00:25:10.957 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:10.957 "strip_size_kb": 0, 00:25:10.957 "state": "online", 00:25:10.957 "raid_level": "raid1", 00:25:10.957 "superblock": true, 00:25:10.958 "num_base_bdevs": 2, 00:25:10.958 "num_base_bdevs_discovered": 1, 00:25:10.958 "num_base_bdevs_operational": 1, 00:25:10.958 "base_bdevs_list": [ 00:25:10.958 { 00:25:10.958 "name": null, 00:25:10.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.958 "is_configured": false, 00:25:10.958 "data_offset": 2048, 00:25:10.958 "data_size": 63488 00:25:10.958 }, 00:25:10.958 { 00:25:10.958 "name": "BaseBdev2", 00:25:10.958 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:10.958 "is_configured": true, 00:25:10.958 "data_offset": 2048, 00:25:10.958 "data_size": 63488 00:25:10.958 } 00:25:10.958 ] 00:25:10.958 }' 00:25:10.958 10:35:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.958 10:35:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.522 10:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:11.780 [2024-07-26 10:35:24.584127] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.780 [2024-07-26 10:35:24.584273] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:11.780 [2024-07-26 10:35:24.584288] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
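Each verify_raid_bdev_process / verify_raid_bdev_state call in the trace follows the same pattern: dump all RAID bdevs over the test's dedicated RPC socket, select the bdev of interest with jq, and compare individual fields. A minimal standalone sketch of that check, assembled from the commands visible in the log — the function name and the hard-coded expectations are illustrative only, not the suite's exact helpers:

#!/usr/bin/env bash
# Sketch of the RAID bdev verification pattern seen in the trace: query
# bdev_raid_get_bdevs over the spdk-raid.sock RPC socket and inspect fields with jq.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_raid_bdev1() {
        local info
        # Fetch every RAID bdev and keep only raid_bdev1, as the trace does.
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # Overall array state and base-bdev count reported by bdev_raid_get_bdevs.
        [ "$(echo "$info" | jq -r '.state')" = online ] || return 1
        [ "$(echo "$info" | jq -r '.num_base_bdevs_operational')" -ge 1 ] || return 1
        # ".process" is present only while a rebuild is running; default to "none".
        echo "rebuild: $(echo "$info" | jq -r '.process.type // "none"') -> $(echo "$info" | jq -r '.process.target // "none"')"
}

check_raid_bdev1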
00:25:11.780 [2024-07-26 10:35:24.584315] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.780 [2024-07-26 10:35:24.588830] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a0900 00:25:11.780 [2024-07-26 10:35:24.590758] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:11.780 10:35:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.715 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.972 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:12.973 "name": "raid_bdev1", 00:25:12.973 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:12.973 "strip_size_kb": 0, 00:25:12.973 "state": "online", 00:25:12.973 "raid_level": "raid1", 00:25:12.973 "superblock": true, 00:25:12.973 "num_base_bdevs": 2, 00:25:12.973 "num_base_bdevs_discovered": 2, 00:25:12.973 "num_base_bdevs_operational": 2, 00:25:12.973 "process": { 00:25:12.973 "type": "rebuild", 00:25:12.973 "target": "spare", 00:25:12.973 "progress": { 00:25:12.973 "blocks": 24576, 00:25:12.973 "percent": 38 00:25:12.973 } 00:25:12.973 }, 00:25:12.973 "base_bdevs_list": [ 00:25:12.973 { 00:25:12.973 "name": "spare", 00:25:12.973 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:12.973 "is_configured": true, 00:25:12.973 "data_offset": 2048, 00:25:12.973 "data_size": 63488 00:25:12.973 }, 00:25:12.973 { 00:25:12.973 "name": "BaseBdev2", 00:25:12.973 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:12.973 "is_configured": true, 00:25:12.973 "data_offset": 2048, 00:25:12.973 "data_size": 63488 00:25:12.973 } 00:25:12.973 ] 00:25:12.973 }' 00:25:12.973 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:13.230 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.230 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:13.230 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.230 10:35:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:13.230 [2024-07-26 10:35:26.129843] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.488 [2024-07-26 10:35:26.202439] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:13.488 [2024-07-26 10:35:26.202481] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:25:13.488 [2024-07-26 10:35:26.202495] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.488 [2024-07-26 10:35:26.202502] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.488 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.746 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.746 "name": "raid_bdev1", 00:25:13.746 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:13.746 "strip_size_kb": 0, 00:25:13.746 "state": "online", 00:25:13.746 "raid_level": "raid1", 00:25:13.746 "superblock": true, 00:25:13.746 "num_base_bdevs": 2, 00:25:13.746 "num_base_bdevs_discovered": 1, 00:25:13.746 "num_base_bdevs_operational": 1, 00:25:13.746 "base_bdevs_list": [ 00:25:13.746 { 00:25:13.746 "name": null, 00:25:13.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.746 "is_configured": false, 00:25:13.746 "data_offset": 2048, 00:25:13.746 "data_size": 63488 00:25:13.746 }, 00:25:13.746 { 00:25:13.746 "name": "BaseBdev2", 00:25:13.746 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:13.746 "is_configured": true, 00:25:13.746 "data_offset": 2048, 00:25:13.746 "data_size": 63488 00:25:13.746 } 00:25:13.746 ] 00:25:13.746 }' 00:25:13.746 10:35:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.746 10:35:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.311 10:35:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:14.568 [2024-07-26 10:35:27.281762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:14.568 [2024-07-26 10:35:27.281811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.568 [2024-07-26 10:35:27.281831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f5fd0 00:25:14.568 [2024-07-26 10:35:27.281842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:25:14.568 [2024-07-26 10:35:27.282203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.568 [2024-07-26 10:35:27.282219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:14.568 [2024-07-26 10:35:27.282292] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:14.568 [2024-07-26 10:35:27.282303] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:14.568 [2024-07-26 10:35:27.282319] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:14.568 [2024-07-26 10:35:27.282336] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.568 [2024-07-26 10:35:27.286863] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a0900 00:25:14.568 spare 00:25:14.568 [2024-07-26 10:35:27.288204] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:14.568 10:35:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.501 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.758 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.758 "name": "raid_bdev1", 00:25:15.758 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:15.758 "strip_size_kb": 0, 00:25:15.758 "state": "online", 00:25:15.758 "raid_level": "raid1", 00:25:15.758 "superblock": true, 00:25:15.758 "num_base_bdevs": 2, 00:25:15.758 "num_base_bdevs_discovered": 2, 00:25:15.759 "num_base_bdevs_operational": 2, 00:25:15.759 "process": { 00:25:15.759 "type": "rebuild", 00:25:15.759 "target": "spare", 00:25:15.759 "progress": { 00:25:15.759 "blocks": 24576, 00:25:15.759 "percent": 38 00:25:15.759 } 00:25:15.759 }, 00:25:15.759 "base_bdevs_list": [ 00:25:15.759 { 00:25:15.759 "name": "spare", 00:25:15.759 "uuid": "cdb5beff-19a0-5327-89d7-3e69ac5a6fec", 00:25:15.759 "is_configured": true, 00:25:15.759 "data_offset": 2048, 00:25:15.759 "data_size": 63488 00:25:15.759 }, 00:25:15.759 { 00:25:15.759 "name": "BaseBdev2", 00:25:15.759 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:15.759 "is_configured": true, 00:25:15.759 "data_offset": 2048, 00:25:15.759 "data_size": 63488 00:25:15.759 } 00:25:15.759 ] 00:25:15.759 }' 00:25:15.759 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:15.759 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.759 10:35:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:15.759 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.759 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:16.016 [2024-07-26 10:35:28.835884] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.016 [2024-07-26 10:35:28.899906] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:16.016 [2024-07-26 10:35:28.899953] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.016 [2024-07-26 10:35:28.899967] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.016 [2024-07-26 10:35:28.899974] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.274 10:35:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.274 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.274 "name": "raid_bdev1", 00:25:16.274 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:16.274 "strip_size_kb": 0, 00:25:16.274 "state": "online", 00:25:16.274 "raid_level": "raid1", 00:25:16.274 "superblock": true, 00:25:16.274 "num_base_bdevs": 2, 00:25:16.274 "num_base_bdevs_discovered": 1, 00:25:16.274 "num_base_bdevs_operational": 1, 00:25:16.274 "base_bdevs_list": [ 00:25:16.274 { 00:25:16.274 "name": null, 00:25:16.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.274 "is_configured": false, 00:25:16.274 "data_offset": 2048, 00:25:16.274 "data_size": 63488 00:25:16.274 }, 00:25:16.274 { 00:25:16.274 "name": "BaseBdev2", 00:25:16.274 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:16.274 "is_configured": true, 00:25:16.274 "data_offset": 2048, 00:25:16.274 "data_size": 63488 00:25:16.274 } 00:25:16.274 ] 00:25:16.274 }' 00:25:16.274 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.274 10:35:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.839 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.098 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.098 "name": "raid_bdev1", 00:25:17.098 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:17.098 "strip_size_kb": 0, 00:25:17.098 "state": "online", 00:25:17.098 "raid_level": "raid1", 00:25:17.098 "superblock": true, 00:25:17.098 "num_base_bdevs": 2, 00:25:17.098 "num_base_bdevs_discovered": 1, 00:25:17.098 "num_base_bdevs_operational": 1, 00:25:17.098 "base_bdevs_list": [ 00:25:17.098 { 00:25:17.098 "name": null, 00:25:17.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.098 "is_configured": false, 00:25:17.098 "data_offset": 2048, 00:25:17.098 "data_size": 63488 00:25:17.098 }, 00:25:17.098 { 00:25:17.098 "name": "BaseBdev2", 00:25:17.098 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:17.098 "is_configured": true, 00:25:17.098 "data_offset": 2048, 00:25:17.098 "data_size": 63488 00:25:17.098 } 00:25:17.098 ] 00:25:17.098 }' 00:25:17.098 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:17.098 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:17.098 10:35:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:17.356 10:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:17.356 10:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:17.356 10:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:17.615 [2024-07-26 10:35:30.444189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:17.615 [2024-07-26 10:35:30.444236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.615 [2024-07-26 10:35:30.444255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f96b0 00:25:17.615 [2024-07-26 10:35:30.444272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.615 [2024-07-26 10:35:30.444592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.615 [2024-07-26 10:35:30.444609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:17.615 [2024-07-26 10:35:30.444667] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:17.615 [2024-07-26 10:35:30.444678] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:17.615 [2024-07-26 10:35:30.444687] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:17.615 BaseBdev1 00:25:17.615 10:35:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:18.989 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:18.989 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:18.989 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:18.989 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.990 "name": "raid_bdev1", 00:25:18.990 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:18.990 "strip_size_kb": 0, 00:25:18.990 "state": "online", 00:25:18.990 "raid_level": "raid1", 00:25:18.990 "superblock": true, 00:25:18.990 "num_base_bdevs": 2, 00:25:18.990 "num_base_bdevs_discovered": 1, 00:25:18.990 "num_base_bdevs_operational": 1, 00:25:18.990 "base_bdevs_list": [ 00:25:18.990 { 00:25:18.990 "name": null, 00:25:18.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.990 "is_configured": false, 00:25:18.990 "data_offset": 2048, 00:25:18.990 "data_size": 63488 00:25:18.990 }, 00:25:18.990 { 00:25:18.990 "name": "BaseBdev2", 00:25:18.990 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:18.990 "is_configured": true, 00:25:18.990 "data_offset": 2048, 00:25:18.990 "data_size": 63488 00:25:18.990 } 00:25:18.990 ] 00:25:18.990 }' 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.990 10:35:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # 
local target=none 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.554 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.811 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.811 "name": "raid_bdev1", 00:25:19.811 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:19.811 "strip_size_kb": 0, 00:25:19.811 "state": "online", 00:25:19.811 "raid_level": "raid1", 00:25:19.811 "superblock": true, 00:25:19.811 "num_base_bdevs": 2, 00:25:19.811 "num_base_bdevs_discovered": 1, 00:25:19.811 "num_base_bdevs_operational": 1, 00:25:19.811 "base_bdevs_list": [ 00:25:19.811 { 00:25:19.811 "name": null, 00:25:19.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.811 "is_configured": false, 00:25:19.811 "data_offset": 2048, 00:25:19.811 "data_size": 63488 00:25:19.811 }, 00:25:19.811 { 00:25:19.811 "name": "BaseBdev2", 00:25:19.811 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:19.811 "is_configured": true, 00:25:19.811 "data_offset": 2048, 00:25:19.811 "data_size": 63488 00:25:19.811 } 00:25:19.811 ] 00:25:19.811 }' 00:25:19.811 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:19.811 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:19.811 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:19.811 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:19.812 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:20.098 [2024-07-26 10:35:32.818477] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:20.098 [2024-07-26 10:35:32.818598] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:20.098 [2024-07-26 10:35:32.818612] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:20.098 request: 00:25:20.098 { 00:25:20.098 "base_bdev": "BaseBdev1", 00:25:20.098 "raid_bdev": "raid_bdev1", 00:25:20.098 "method": "bdev_raid_add_base_bdev", 00:25:20.098 "req_id": 1 00:25:20.098 } 00:25:20.098 Got JSON-RPC error response 00:25:20.098 response: 00:25:20.098 { 00:25:20.098 "code": -22, 00:25:20.098 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:20.098 } 00:25:20.098 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:25:20.098 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:20.098 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:20.098 10:35:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:20.098 10:35:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.031 10:35:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.288 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.288 "name": "raid_bdev1", 00:25:21.288 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:21.288 "strip_size_kb": 0, 00:25:21.288 "state": "online", 00:25:21.288 "raid_level": "raid1", 00:25:21.288 "superblock": true, 00:25:21.288 "num_base_bdevs": 2, 00:25:21.288 "num_base_bdevs_discovered": 1, 00:25:21.288 "num_base_bdevs_operational": 1, 00:25:21.288 
"base_bdevs_list": [ 00:25:21.288 { 00:25:21.288 "name": null, 00:25:21.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.288 "is_configured": false, 00:25:21.288 "data_offset": 2048, 00:25:21.288 "data_size": 63488 00:25:21.288 }, 00:25:21.288 { 00:25:21.288 "name": "BaseBdev2", 00:25:21.288 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:21.288 "is_configured": true, 00:25:21.288 "data_offset": 2048, 00:25:21.288 "data_size": 63488 00:25:21.288 } 00:25:21.288 ] 00:25:21.288 }' 00:25:21.288 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.288 10:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.853 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.110 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:22.110 "name": "raid_bdev1", 00:25:22.110 "uuid": "2d9c5df3-b85d-4b34-a4ce-2ee7ac39225c", 00:25:22.110 "strip_size_kb": 0, 00:25:22.111 "state": "online", 00:25:22.111 "raid_level": "raid1", 00:25:22.111 "superblock": true, 00:25:22.111 "num_base_bdevs": 2, 00:25:22.111 "num_base_bdevs_discovered": 1, 00:25:22.111 "num_base_bdevs_operational": 1, 00:25:22.111 "base_bdevs_list": [ 00:25:22.111 { 00:25:22.111 "name": null, 00:25:22.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.111 "is_configured": false, 00:25:22.111 "data_offset": 2048, 00:25:22.111 "data_size": 63488 00:25:22.111 }, 00:25:22.111 { 00:25:22.111 "name": "BaseBdev2", 00:25:22.111 "uuid": "3ffb67ec-4d95-5be8-9488-fa63eb548a61", 00:25:22.111 "is_configured": true, 00:25:22.111 "data_offset": 2048, 00:25:22.111 "data_size": 63488 00:25:22.111 } 00:25:22.111 ] 00:25:22.111 }' 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 3476319 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3476319 ']' 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 3476319 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:25:22.111 10:35:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:22.111 10:35:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3476319 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3476319' 00:25:22.368 killing process with pid 3476319 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 3476319 00:25:22.368 Received shutdown signal, test time was about 60.000000 seconds 00:25:22.368 00:25:22.368 Latency(us) 00:25:22.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.368 =================================================================================================================== 00:25:22.368 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:22.368 [2024-07-26 10:35:35.024865] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:22.368 [2024-07-26 10:35:35.024951] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.368 [2024-07-26 10:35:35.024994] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:22.368 [2024-07-26 10:35:35.025005] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26a4d60 name raid_bdev1, state offline 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 3476319 00:25:22.368 [2024-07-26 10:35:35.048954] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:25:22.368 00:25:22.368 real 0m34.192s 00:25:22.368 user 0m49.299s 00:25:22.368 sys 0m6.469s 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:22.368 10:35:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.368 ************************************ 00:25:22.368 END TEST raid_rebuild_test_sb 00:25:22.368 ************************************ 00:25:22.368 10:35:35 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:25:22.368 10:35:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:22.368 10:35:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:22.368 10:35:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 ************************************ 00:25:22.626 START TEST raid_rebuild_test_io 00:25:22.626 ************************************ 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( 
i = 1 )) 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3482574 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3482574 /var/tmp/spdk-raid.sock 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 3482574 ']' 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:22.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:22.626 10:35:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:22.626 [2024-07-26 10:35:35.368094] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:25:22.627 [2024-07-26 10:35:35.368163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482574 ] 00:25:22.627 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:22.627 Zero copy mechanism will not be used. 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:22.627 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:22.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:22.627 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:22.627 [2024-07-26 10:35:35.501369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.884 [2024-07-26 10:35:35.546073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.884 [2024-07-26 10:35:35.608720] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.884 [2024-07-26 10:35:35.608756] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.450 10:35:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.450 10:35:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:25:23.450 10:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:23.450 10:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:23.708 BaseBdev1_malloc 00:25:23.708 10:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.966 [2024-07-26 10:35:36.711429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.966 [2024-07-26 10:35:36.711473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.966 [2024-07-26 10:35:36.711495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfad370 00:25:23.966 [2024-07-26 10:35:36.711507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.966 [2024-07-26 10:35:36.713017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.966 [2024-07-26 10:35:36.713042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.966 BaseBdev1 00:25:23.966 10:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:23.966 10:35:36 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:24.223 BaseBdev2_malloc 00:25:24.223 10:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:24.481 [2024-07-26 10:35:37.157080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:24.481 [2024-07-26 10:35:37.157119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.481 [2024-07-26 10:35:37.157137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf690d0 00:25:24.481 [2024-07-26 10:35:37.157154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.481 [2024-07-26 10:35:37.158580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.481 [2024-07-26 10:35:37.158607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:24.481 BaseBdev2 00:25:24.481 10:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:24.738 spare_malloc 00:25:24.738 10:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:24.738 spare_delay 00:25:24.738 10:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:24.996 [2024-07-26 10:35:37.843304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.996 [2024-07-26 10:35:37.843346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.996 [2024-07-26 10:35:37.843366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf57070 00:25:24.996 [2024-07-26 10:35:37.843378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.996 [2024-07-26 10:35:37.844746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.996 [2024-07-26 10:35:37.844772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:24.996 spare 00:25:24.996 10:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:25.254 [2024-07-26 10:35:38.055870] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.254 [2024-07-26 10:35:38.056963] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:25.254 [2024-07-26 10:35:38.057035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf58250 00:25:25.254 [2024-07-26 10:35:38.057045] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:25.254 [2024-07-26 10:35:38.057232] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe124f0 00:25:25.254 [2024-07-26 10:35:38.057352] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf58250 00:25:25.254 [2024-07-26 10:35:38.057361] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf58250 00:25:25.254 [2024-07-26 10:35:38.057458] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.254 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.511 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.511 "name": "raid_bdev1", 00:25:25.511 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:25.511 "strip_size_kb": 0, 00:25:25.511 "state": "online", 00:25:25.511 "raid_level": "raid1", 00:25:25.511 "superblock": false, 00:25:25.511 "num_base_bdevs": 2, 00:25:25.511 "num_base_bdevs_discovered": 2, 00:25:25.511 "num_base_bdevs_operational": 2, 00:25:25.511 "base_bdevs_list": [ 00:25:25.511 { 00:25:25.511 "name": "BaseBdev1", 00:25:25.511 "uuid": "2cb9da04-9e32-5628-8c6b-ce5ea111fac4", 00:25:25.511 "is_configured": true, 00:25:25.511 "data_offset": 0, 00:25:25.511 "data_size": 65536 00:25:25.511 }, 00:25:25.511 { 00:25:25.511 "name": "BaseBdev2", 00:25:25.511 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:25.511 "is_configured": true, 00:25:25.511 "data_offset": 0, 00:25:25.511 "data_size": 65536 00:25:25.511 } 00:25:25.511 ] 00:25:25.511 }' 00:25:25.511 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.511 10:35:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:26.075 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:26.075 10:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:26.332 [2024-07-26 10:35:39.062786] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:26.332 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:25:26.332 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.332 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:26.590 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:25:26.590 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:26.590 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:26.590 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:26.590 [2024-07-26 10:35:39.417564] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf55e30 00:25:26.590 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:26.590 Zero copy mechanism will not be used. 00:25:26.590 Running I/O for 60 seconds... 00:25:26.849 [2024-07-26 10:35:39.525359] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:26.849 [2024-07-26 10:35:39.532860] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf55e30 00:25:26.849 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:26.849 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:26.849 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.849 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.850 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.108 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.108 "name": "raid_bdev1", 00:25:27.108 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:27.108 "strip_size_kb": 0, 00:25:27.108 "state": "online", 00:25:27.108 "raid_level": "raid1", 00:25:27.108 "superblock": false, 00:25:27.108 "num_base_bdevs": 2, 00:25:27.108 "num_base_bdevs_discovered": 1, 00:25:27.108 "num_base_bdevs_operational": 1, 00:25:27.108 "base_bdevs_list": [ 00:25:27.108 { 00:25:27.108 "name": null, 00:25:27.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.108 "is_configured": false, 00:25:27.108 "data_offset": 0, 00:25:27.108 "data_size": 65536 00:25:27.108 }, 00:25:27.108 { 00:25:27.108 
"name": "BaseBdev2", 00:25:27.108 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:27.108 "is_configured": true, 00:25:27.108 "data_offset": 0, 00:25:27.108 "data_size": 65536 00:25:27.108 } 00:25:27.108 ] 00:25:27.108 }' 00:25:27.108 10:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.108 10:35:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:27.674 10:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:27.931 [2024-07-26 10:35:40.605746] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:27.931 [2024-07-26 10:35:40.652003] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf5acd0 00:25:27.931 10:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:27.931 [2024-07-26 10:35:40.654155] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:27.931 [2024-07-26 10:35:40.770519] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:27.931 [2024-07-26 10:35:40.770871] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:28.189 [2024-07-26 10:35:40.989750] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:28.189 [2024-07-26 10:35:40.989898] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:28.447 [2024-07-26 10:35:41.333071] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:28.704 [2024-07-26 10:35:41.573394] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:28.704 [2024-07-26 10:35:41.573544] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.962 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.220 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:29.220 "name": "raid_bdev1", 00:25:29.220 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:29.220 "strip_size_kb": 0, 00:25:29.220 "state": "online", 00:25:29.220 "raid_level": "raid1", 00:25:29.220 "superblock": false, 00:25:29.220 "num_base_bdevs": 2, 00:25:29.220 "num_base_bdevs_discovered": 2, 00:25:29.220 
"num_base_bdevs_operational": 2, 00:25:29.220 "process": { 00:25:29.220 "type": "rebuild", 00:25:29.220 "target": "spare", 00:25:29.220 "progress": { 00:25:29.220 "blocks": 12288, 00:25:29.220 "percent": 18 00:25:29.220 } 00:25:29.220 }, 00:25:29.220 "base_bdevs_list": [ 00:25:29.220 { 00:25:29.220 "name": "spare", 00:25:29.221 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:29.221 "is_configured": true, 00:25:29.221 "data_offset": 0, 00:25:29.221 "data_size": 65536 00:25:29.221 }, 00:25:29.221 { 00:25:29.221 "name": "BaseBdev2", 00:25:29.221 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:29.221 "is_configured": true, 00:25:29.221 "data_offset": 0, 00:25:29.221 "data_size": 65536 00:25:29.221 } 00:25:29.221 ] 00:25:29.221 }' 00:25:29.221 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:29.221 [2024-07-26 10:35:41.932866] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:29.221 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.221 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:29.221 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.221 10:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:29.221 [2024-07-26 10:35:42.041995] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:29.221 [2024-07-26 10:35:42.042190] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:29.479 [2024-07-26 10:35:42.128161] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.479 [2024-07-26 10:35:42.159410] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:29.479 [2024-07-26 10:35:42.267707] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:29.479 [2024-07-26 10:35:42.283628] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.479 [2024-07-26 10:35:42.283651] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.479 [2024-07-26 10:35:42.283660] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:29.479 [2024-07-26 10:35:42.311309] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf55e30 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:29.479 10:35:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.479 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.737 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.737 "name": "raid_bdev1", 00:25:29.737 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:29.737 "strip_size_kb": 0, 00:25:29.737 "state": "online", 00:25:29.737 "raid_level": "raid1", 00:25:29.737 "superblock": false, 00:25:29.737 "num_base_bdevs": 2, 00:25:29.737 "num_base_bdevs_discovered": 1, 00:25:29.737 "num_base_bdevs_operational": 1, 00:25:29.737 "base_bdevs_list": [ 00:25:29.737 { 00:25:29.737 "name": null, 00:25:29.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.737 "is_configured": false, 00:25:29.737 "data_offset": 0, 00:25:29.737 "data_size": 65536 00:25:29.737 }, 00:25:29.737 { 00:25:29.737 "name": "BaseBdev2", 00:25:29.737 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:29.737 "is_configured": true, 00:25:29.737 "data_offset": 0, 00:25:29.737 "data_size": 65536 00:25:29.737 } 00:25:29.737 ] 00:25:29.737 }' 00:25:29.737 10:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.737 10:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.305 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.577 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:30.577 "name": "raid_bdev1", 00:25:30.577 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:30.577 "strip_size_kb": 0, 00:25:30.577 "state": "online", 00:25:30.577 "raid_level": "raid1", 00:25:30.577 "superblock": false, 00:25:30.577 "num_base_bdevs": 2, 00:25:30.577 "num_base_bdevs_discovered": 1, 00:25:30.577 "num_base_bdevs_operational": 1, 00:25:30.577 "base_bdevs_list": [ 00:25:30.577 { 00:25:30.577 "name": null, 00:25:30.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.577 "is_configured": false, 00:25:30.577 "data_offset": 0, 00:25:30.577 "data_size": 65536 00:25:30.577 }, 00:25:30.577 { 00:25:30.577 "name": "BaseBdev2", 00:25:30.577 "uuid": 
"1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:30.577 "is_configured": true, 00:25:30.577 "data_offset": 0, 00:25:30.577 "data_size": 65536 00:25:30.577 } 00:25:30.577 ] 00:25:30.577 }' 00:25:30.577 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:30.577 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:30.577 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:30.835 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:30.835 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:30.835 [2024-07-26 10:35:43.710120] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:31.093 [2024-07-26 10:35:43.741549] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfab730 00:25:31.093 [2024-07-26 10:35:43.742910] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:31.093 10:35:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:31.093 [2024-07-26 10:35:43.870525] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:31.093 [2024-07-26 10:35:43.878228] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:31.350 [2024-07-26 10:35:44.096205] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:31.350 [2024-07-26 10:35:44.096352] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:31.607 [2024-07-26 10:35:44.431961] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:31.607 [2024-07-26 10:35:44.432313] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:31.899 [2024-07-26 10:35:44.642510] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:31.899 [2024-07-26 10:35:44.642717] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.899 10:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.156 10:35:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:32.156 "name": "raid_bdev1", 00:25:32.156 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:32.156 "strip_size_kb": 0, 00:25:32.156 "state": "online", 00:25:32.156 "raid_level": "raid1", 00:25:32.156 "superblock": false, 00:25:32.156 "num_base_bdevs": 2, 00:25:32.156 "num_base_bdevs_discovered": 2, 00:25:32.156 "num_base_bdevs_operational": 2, 00:25:32.156 "process": { 00:25:32.156 "type": "rebuild", 00:25:32.156 "target": "spare", 00:25:32.156 "progress": { 00:25:32.156 "blocks": 12288, 00:25:32.156 "percent": 18 00:25:32.156 } 00:25:32.156 }, 00:25:32.156 "base_bdevs_list": [ 00:25:32.156 { 00:25:32.156 "name": "spare", 00:25:32.156 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:32.156 "is_configured": true, 00:25:32.156 "data_offset": 0, 00:25:32.156 "data_size": 65536 00:25:32.156 }, 00:25:32.156 { 00:25:32.156 "name": "BaseBdev2", 00:25:32.156 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:32.156 "is_configured": true, 00:25:32.156 "data_offset": 0, 00:25:32.156 "data_size": 65536 00:25:32.156 } 00:25:32.156 ] 00:25:32.156 }' 00:25:32.156 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:32.156 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.157 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=789 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.414 [2024-07-26 10:35:45.111129] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:32.414 "name": "raid_bdev1", 00:25:32.414 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:32.414 "strip_size_kb": 0, 00:25:32.414 "state": "online", 00:25:32.414 "raid_level": 
"raid1", 00:25:32.414 "superblock": false, 00:25:32.414 "num_base_bdevs": 2, 00:25:32.414 "num_base_bdevs_discovered": 2, 00:25:32.414 "num_base_bdevs_operational": 2, 00:25:32.414 "process": { 00:25:32.414 "type": "rebuild", 00:25:32.414 "target": "spare", 00:25:32.414 "progress": { 00:25:32.414 "blocks": 16384, 00:25:32.414 "percent": 25 00:25:32.414 } 00:25:32.414 }, 00:25:32.414 "base_bdevs_list": [ 00:25:32.414 { 00:25:32.414 "name": "spare", 00:25:32.414 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:32.414 "is_configured": true, 00:25:32.414 "data_offset": 0, 00:25:32.414 "data_size": 65536 00:25:32.414 }, 00:25:32.414 { 00:25:32.414 "name": "BaseBdev2", 00:25:32.414 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:32.414 "is_configured": true, 00:25:32.414 "data_offset": 0, 00:25:32.414 "data_size": 65536 00:25:32.414 } 00:25:32.414 ] 00:25:32.414 }' 00:25:32.414 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:32.672 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.672 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:32.672 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.672 10:35:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:32.672 [2024-07-26 10:35:45.482085] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:32.928 [2024-07-26 10:35:45.614657] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:32.929 [2024-07-26 10:35:45.614799] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:33.185 [2024-07-26 10:35:45.965105] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:33.185 [2024-07-26 10:35:45.965319] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:33.443 [2024-07-26 10:35:46.182697] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.701 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.701 [2024-07-26 10:35:46.417450] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 
offset_end: 36864 00:25:33.958 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.959 "name": "raid_bdev1", 00:25:33.959 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:33.959 "strip_size_kb": 0, 00:25:33.959 "state": "online", 00:25:33.959 "raid_level": "raid1", 00:25:33.959 "superblock": false, 00:25:33.959 "num_base_bdevs": 2, 00:25:33.959 "num_base_bdevs_discovered": 2, 00:25:33.959 "num_base_bdevs_operational": 2, 00:25:33.959 "process": { 00:25:33.959 "type": "rebuild", 00:25:33.959 "target": "spare", 00:25:33.959 "progress": { 00:25:33.959 "blocks": 32768, 00:25:33.959 "percent": 50 00:25:33.959 } 00:25:33.959 }, 00:25:33.959 "base_bdevs_list": [ 00:25:33.959 { 00:25:33.959 "name": "spare", 00:25:33.959 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:33.959 "is_configured": true, 00:25:33.959 "data_offset": 0, 00:25:33.959 "data_size": 65536 00:25:33.959 }, 00:25:33.959 { 00:25:33.959 "name": "BaseBdev2", 00:25:33.959 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:33.959 "is_configured": true, 00:25:33.959 "data_offset": 0, 00:25:33.959 "data_size": 65536 00:25:33.959 } 00:25:33.959 ] 00:25:33.959 }' 00:25:33.959 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:33.959 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.959 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:33.959 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.959 10:35:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:34.216 [2024-07-26 10:35:46.869245] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:34.216 [2024-07-26 10:35:46.978734] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.149 "name": "raid_bdev1", 00:25:35.149 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:35.149 "strip_size_kb": 0, 00:25:35.149 "state": "online", 00:25:35.149 "raid_level": "raid1", 00:25:35.149 "superblock": false, 00:25:35.149 "num_base_bdevs": 2, 00:25:35.149 "num_base_bdevs_discovered": 2, 00:25:35.149 "num_base_bdevs_operational": 2, 00:25:35.149 "process": { 
00:25:35.149 "type": "rebuild", 00:25:35.149 "target": "spare", 00:25:35.149 "progress": { 00:25:35.149 "blocks": 57344, 00:25:35.149 "percent": 87 00:25:35.149 } 00:25:35.149 }, 00:25:35.149 "base_bdevs_list": [ 00:25:35.149 { 00:25:35.149 "name": "spare", 00:25:35.149 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:35.149 "is_configured": true, 00:25:35.149 "data_offset": 0, 00:25:35.149 "data_size": 65536 00:25:35.149 }, 00:25:35.149 { 00:25:35.149 "name": "BaseBdev2", 00:25:35.149 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:35.149 "is_configured": true, 00:25:35.149 "data_offset": 0, 00:25:35.149 "data_size": 65536 00:25:35.149 } 00:25:35.149 ] 00:25:35.149 }' 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.149 10:35:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:35.149 10:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.149 10:35:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:35.714 [2024-07-26 10:35:48.340651] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:35.714 [2024-07-26 10:35:48.440895] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:35.714 [2024-07-26 10:35:48.450039] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.279 "name": "raid_bdev1", 00:25:36.279 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:36.279 "strip_size_kb": 0, 00:25:36.279 "state": "online", 00:25:36.279 "raid_level": "raid1", 00:25:36.279 "superblock": false, 00:25:36.279 "num_base_bdevs": 2, 00:25:36.279 "num_base_bdevs_discovered": 2, 00:25:36.279 "num_base_bdevs_operational": 2, 00:25:36.279 "base_bdevs_list": [ 00:25:36.279 { 00:25:36.279 "name": "spare", 00:25:36.279 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:36.279 "is_configured": true, 00:25:36.279 "data_offset": 0, 00:25:36.279 "data_size": 65536 00:25:36.279 }, 00:25:36.279 { 00:25:36.279 "name": "BaseBdev2", 00:25:36.279 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:36.279 "is_configured": true, 00:25:36.279 "data_offset": 0, 00:25:36.279 "data_size": 65536 00:25:36.279 } 
00:25:36.279 ] 00:25:36.279 }' 00:25:36.279 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.537 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:36.795 "name": "raid_bdev1", 00:25:36.795 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:36.795 "strip_size_kb": 0, 00:25:36.795 "state": "online", 00:25:36.795 "raid_level": "raid1", 00:25:36.795 "superblock": false, 00:25:36.795 "num_base_bdevs": 2, 00:25:36.795 "num_base_bdevs_discovered": 2, 00:25:36.795 "num_base_bdevs_operational": 2, 00:25:36.795 "base_bdevs_list": [ 00:25:36.795 { 00:25:36.795 "name": "spare", 00:25:36.795 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:36.795 "is_configured": true, 00:25:36.795 "data_offset": 0, 00:25:36.795 "data_size": 65536 00:25:36.795 }, 00:25:36.795 { 00:25:36.795 "name": "BaseBdev2", 00:25:36.795 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:36.795 "is_configured": true, 00:25:36.795 "data_offset": 0, 00:25:36.795 "data_size": 65536 00:25:36.795 } 00:25:36.795 ] 00:25:36.795 }' 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.795 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.053 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.053 "name": "raid_bdev1", 00:25:37.053 "uuid": "b9e33c0d-aa28-435f-84af-3f385356d7ba", 00:25:37.053 "strip_size_kb": 0, 00:25:37.053 "state": "online", 00:25:37.053 "raid_level": "raid1", 00:25:37.053 "superblock": false, 00:25:37.053 "num_base_bdevs": 2, 00:25:37.053 "num_base_bdevs_discovered": 2, 00:25:37.053 "num_base_bdevs_operational": 2, 00:25:37.053 "base_bdevs_list": [ 00:25:37.053 { 00:25:37.053 "name": "spare", 00:25:37.053 "uuid": "102f5258-2bf9-5e96-bdec-a38caab91ac9", 00:25:37.053 "is_configured": true, 00:25:37.053 "data_offset": 0, 00:25:37.053 "data_size": 65536 00:25:37.053 }, 00:25:37.053 { 00:25:37.053 "name": "BaseBdev2", 00:25:37.053 "uuid": "1bb01c33-fa37-5472-bf51-08acd182d2e9", 00:25:37.053 "is_configured": true, 00:25:37.053 "data_offset": 0, 00:25:37.053 "data_size": 65536 00:25:37.053 } 00:25:37.053 ] 00:25:37.053 }' 00:25:37.053 10:35:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.053 10:35:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:37.618 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:37.876 [2024-07-26 10:35:50.533279] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.876 [2024-07-26 10:35:50.533306] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.876 00:25:37.876 Latency(us) 00:25:37.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.876 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:37.876 raid_bdev1 : 11.19 105.04 315.11 0.00 0.00 12224.17 265.42 116601.65 00:25:37.876 =================================================================================================================== 00:25:37.876 Total : 105.04 315.11 0.00 0.00 12224.17 265.42 116601.65 00:25:37.876 [2024-07-26 10:35:50.637210] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.876 [2024-07-26 10:35:50.637237] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.876 [2024-07-26 10:35:50.637303] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.876 [2024-07-26 10:35:50.637314] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf58250 name raid_bdev1, state offline 00:25:37.876 0 00:25:37.876 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.876 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:38.134 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:38.135 10:35:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:38.392 /dev/nbd0 00:25:38.392 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:38.392 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:38.392 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:38.392 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:38.392 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:38.393 1+0 records in 00:25:38.393 1+0 records out 00:25:38.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259583 s, 15.8 MB/s 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
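The trace above amounts to a short RPC sequence against the test's dedicated socket: dump the raid bdev record, delete the array, and confirm that no raid bdevs remain before the members are exported over NBD. Condensed into a minimal sketch (socket path, script location, and bdev name are taken from the log; the harness's xtrace plumbing and error handling are omitted):

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Dump the state of raid_bdev1 as reported by the target
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

  # Delete the array, then verify the raid bdev list is empty
  $RPC bdev_raid_delete raid_bdev1
  [ "$($RPC bdev_raid_get_bdevs all | jq length)" -eq 0 ]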
00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:38.393 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:25:38.651 /dev/nbd1 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:38.651 1+0 records in 00:25:38.651 1+0 records out 00:25:38.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268269 s, 15.3 MB/s 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:38.651 10:35:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:38.651 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:38.909 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 3482574 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 3482574 ']' 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 3482574 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3482574 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3482574' 00:25:39.167 killing process with pid 3482574 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 3482574 00:25:39.167 Received shutdown signal, test time was about 12.541382 seconds 00:25:39.167 00:25:39.167 Latency(us) 00:25:39.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.167 =================================================================================================================== 00:25:39.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.167 [2024-07-26 10:35:51.991571] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:39.167 10:35:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 3482574 00:25:39.167 [2024-07-26 10:35:52.009894] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:25:39.425 00:25:39.425 real 0m16.884s 00:25:39.425 user 0m25.446s 00:25:39.425 sys 0m2.631s 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:39.425 ************************************ 00:25:39.425 END TEST raid_rebuild_test_io 00:25:39.425 ************************************ 00:25:39.425 10:35:52 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:25:39.425 10:35:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:39.425 10:35:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.425 10:35:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.425 ************************************ 00:25:39.425 START TEST raid_rebuild_test_sb_io 00:25:39.425 ************************************ 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:39.425 10:35:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3485587 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3485587 /var/tmp/spdk-raid.sock 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 3485587 ']' 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:39.425 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.426 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
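At this point the io test has finished (the spare and BaseBdev2 NBD exports were compared with cmp and torn down, and the bdevperf process was killed), and the sb_io variant starts its own bdevperf instance to generate background I/O while the rebuild runs. A minimal sketch of the launch, with the flags copied from the command line in the trace; the waitforlisten helper is replaced here by a plain poll for the RPC socket, which is a simplified stand-in rather than the harness's actual implementation:

  # Start bdevperf as the RPC target: 60 s of 50/50 random read/write, 3 MiB I/O size, queue depth 2
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Simplified stand-in for waitforlisten: wait until the UNIX-domain RPC socket exists
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done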
00:25:39.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:39.426 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.426 10:35:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 [2024-07-26 10:35:52.343790] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:25:39.684 [2024-07-26 10:35:52.343847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485587 ] 00:25:39.684 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:39.684 Zero copy mechanism will not be used. 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:39.684 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:39.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:39.684 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:39.684 [2024-07-26 10:35:52.477658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.684 [2024-07-26 10:35:52.522103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.684 [2024-07-26 10:35:52.579560] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:39.684 [2024-07-26 10:35:52.579596] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.615 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.615 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:25:40.615 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:40.615 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:40.615 BaseBdev1_malloc 00:25:40.615 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:40.871 [2024-07-26 10:35:53.690100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:40.871 [2024-07-26 10:35:53.690150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.871 [2024-07-26 10:35:53.690173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ba370 00:25:40.871 [2024-07-26 10:35:53.690184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:25:40.871 [2024-07-26 10:35:53.691612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.871 [2024-07-26 10:35:53.691639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:40.871 BaseBdev1 00:25:40.871 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:40.871 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:41.128 BaseBdev2_malloc 00:25:41.128 10:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:41.385 [2024-07-26 10:35:54.147852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:41.385 [2024-07-26 10:35:54.147892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:41.385 [2024-07-26 10:35:54.147910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23760d0 00:25:41.385 [2024-07-26 10:35:54.147921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:41.385 [2024-07-26 10:35:54.149352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:41.385 [2024-07-26 10:35:54.149378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:41.385 BaseBdev2 00:25:41.385 10:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:41.642 spare_malloc 00:25:41.642 10:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:41.899 spare_delay 00:25:41.899 10:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:42.156 [2024-07-26 10:35:54.834046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:42.156 [2024-07-26 10:35:54.834089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.156 [2024-07-26 10:35:54.834107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2364070 00:25:42.156 [2024-07-26 10:35:54.834119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.156 [2024-07-26 10:35:54.835512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.156 [2024-07-26 10:35:54.835538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:42.156 spare 00:25:42.156 10:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:25:42.156 [2024-07-26 10:35:55.050634] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:42.156 [2024-07-26 10:35:55.051771] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:42.156 [2024-07-26 10:35:55.051903] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2365250 00:25:42.156 [2024-07-26 10:35:55.051915] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:42.156 [2024-07-26 10:35:55.052087] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x221f4f0 00:25:42.156 [2024-07-26 10:35:55.052217] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2365250 00:25:42.156 [2024-07-26 10:35:55.052227] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2365250 00:25:42.156 [2024-07-26 10:35:55.052326] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.414 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.414 "name": "raid_bdev1", 00:25:42.414 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:42.414 "strip_size_kb": 0, 00:25:42.414 "state": "online", 00:25:42.414 "raid_level": "raid1", 00:25:42.414 "superblock": true, 00:25:42.414 "num_base_bdevs": 2, 00:25:42.414 "num_base_bdevs_discovered": 2, 00:25:42.414 "num_base_bdevs_operational": 2, 00:25:42.414 "base_bdevs_list": [ 00:25:42.414 { 00:25:42.414 "name": "BaseBdev1", 00:25:42.414 "uuid": "68f20aaa-48a6-5df0-aaae-9c6a6a2670f7", 00:25:42.414 "is_configured": true, 00:25:42.414 "data_offset": 2048, 00:25:42.415 "data_size": 63488 00:25:42.415 }, 00:25:42.415 { 00:25:42.415 "name": "BaseBdev2", 00:25:42.415 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:42.415 "is_configured": true, 00:25:42.415 "data_offset": 2048, 00:25:42.415 "data_size": 63488 00:25:42.415 } 00:25:42.415 ] 00:25:42.415 }' 00:25:42.415 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.415 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:42.980 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:42.980 10:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:43.237 [2024-07-26 10:35:56.077627] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:43.237 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:25:43.237 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.237 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:43.495 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:25:43.495 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:43.495 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:43.495 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:43.755 [2024-07-26 10:35:56.436295] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2362f90 00:25:43.755 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:43.755 Zero copy mechanism will not be used. 00:25:43.755 Running I/O for 60 seconds... 00:25:43.755 [2024-07-26 10:35:56.544104] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:43.755 [2024-07-26 10:35:56.551628] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2362f90 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.755 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.025 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.025 
"name": "raid_bdev1", 00:25:44.025 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:44.025 "strip_size_kb": 0, 00:25:44.025 "state": "online", 00:25:44.025 "raid_level": "raid1", 00:25:44.025 "superblock": true, 00:25:44.025 "num_base_bdevs": 2, 00:25:44.025 "num_base_bdevs_discovered": 1, 00:25:44.025 "num_base_bdevs_operational": 1, 00:25:44.025 "base_bdevs_list": [ 00:25:44.025 { 00:25:44.025 "name": null, 00:25:44.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.025 "is_configured": false, 00:25:44.025 "data_offset": 2048, 00:25:44.025 "data_size": 63488 00:25:44.025 }, 00:25:44.025 { 00:25:44.025 "name": "BaseBdev2", 00:25:44.025 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:44.025 "is_configured": true, 00:25:44.025 "data_offset": 2048, 00:25:44.025 "data_size": 63488 00:25:44.025 } 00:25:44.025 ] 00:25:44.025 }' 00:25:44.025 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.025 10:35:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:44.589 10:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:44.847 [2024-07-26 10:35:57.565930] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.847 10:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:44.847 [2024-07-26 10:35:57.619175] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2368340 00:25:44.847 [2024-07-26 10:35:57.621473] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:44.847 [2024-07-26 10:35:57.731281] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:44.847 [2024-07-26 10:35:57.731546] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:45.104 [2024-07-26 10:35:57.863979] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:45.104 [2024-07-26 10:35:57.864101] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:45.361 [2024-07-26 10:35:58.206564] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:45.361 [2024-07-26 10:35:58.206815] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:45.618 [2024-07-26 10:35:58.416797] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.875 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.875 [2024-07-26 10:35:58.755159] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:45.875 [2024-07-26 10:35:58.755320] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:46.132 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:46.132 "name": "raid_bdev1", 00:25:46.132 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:46.132 "strip_size_kb": 0, 00:25:46.132 "state": "online", 00:25:46.132 "raid_level": "raid1", 00:25:46.132 "superblock": true, 00:25:46.132 "num_base_bdevs": 2, 00:25:46.132 "num_base_bdevs_discovered": 2, 00:25:46.132 "num_base_bdevs_operational": 2, 00:25:46.132 "process": { 00:25:46.132 "type": "rebuild", 00:25:46.132 "target": "spare", 00:25:46.132 "progress": { 00:25:46.132 "blocks": 16384, 00:25:46.132 "percent": 25 00:25:46.132 } 00:25:46.132 }, 00:25:46.132 "base_bdevs_list": [ 00:25:46.132 { 00:25:46.132 "name": "spare", 00:25:46.132 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:46.132 "is_configured": true, 00:25:46.132 "data_offset": 2048, 00:25:46.132 "data_size": 63488 00:25:46.132 }, 00:25:46.132 { 00:25:46.132 "name": "BaseBdev2", 00:25:46.132 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:46.132 "is_configured": true, 00:25:46.132 "data_offset": 2048, 00:25:46.132 "data_size": 63488 00:25:46.132 } 00:25:46.132 ] 00:25:46.132 }' 00:25:46.132 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:46.132 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:46.132 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:46.132 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:46.132 10:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:46.389 [2024-07-26 10:35:59.135435] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:46.389 [2024-07-26 10:35:59.184327] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:46.389 [2024-07-26 10:35:59.184532] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:46.647 [2024-07-26 10:35:59.293180] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:46.647 [2024-07-26 10:35:59.302284] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.647 [2024-07-26 10:35:59.302309] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:46.647 [2024-07-26 10:35:59.302318] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:46.647 [2024-07-26 10:35:59.322837] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2362f90 00:25:46.647 10:35:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.647 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.905 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.905 "name": "raid_bdev1", 00:25:46.905 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:46.905 "strip_size_kb": 0, 00:25:46.905 "state": "online", 00:25:46.905 "raid_level": "raid1", 00:25:46.905 "superblock": true, 00:25:46.905 "num_base_bdevs": 2, 00:25:46.905 "num_base_bdevs_discovered": 1, 00:25:46.905 "num_base_bdevs_operational": 1, 00:25:46.905 "base_bdevs_list": [ 00:25:46.905 { 00:25:46.905 "name": null, 00:25:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.905 "is_configured": false, 00:25:46.905 "data_offset": 2048, 00:25:46.905 "data_size": 63488 00:25:46.905 }, 00:25:46.905 { 00:25:46.905 "name": "BaseBdev2", 00:25:46.905 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:46.905 "is_configured": true, 00:25:46.905 "data_offset": 2048, 00:25:46.905 "data_size": 63488 00:25:46.905 } 00:25:46.905 ] 00:25:46.905 }' 00:25:46.905 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.905 10:35:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.469 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.727 10:36:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:47.727 "name": "raid_bdev1", 00:25:47.727 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:47.727 "strip_size_kb": 0, 00:25:47.727 "state": "online", 00:25:47.727 "raid_level": "raid1", 00:25:47.727 "superblock": true, 00:25:47.727 "num_base_bdevs": 2, 00:25:47.727 "num_base_bdevs_discovered": 1, 00:25:47.727 "num_base_bdevs_operational": 1, 00:25:47.727 "base_bdevs_list": [ 00:25:47.727 { 00:25:47.727 "name": null, 00:25:47.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.727 "is_configured": false, 00:25:47.727 "data_offset": 2048, 00:25:47.727 "data_size": 63488 00:25:47.727 }, 00:25:47.727 { 00:25:47.727 "name": "BaseBdev2", 00:25:47.727 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:47.727 "is_configured": true, 00:25:47.727 "data_offset": 2048, 00:25:47.727 "data_size": 63488 00:25:47.727 } 00:25:47.727 ] 00:25:47.727 }' 00:25:47.727 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:47.727 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:47.727 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:47.727 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:47.727 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:47.985 [2024-07-26 10:36:00.720881] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:47.985 10:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:47.985 [2024-07-26 10:36:00.797477] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2365820 00:25:47.985 [2024-07-26 10:36:00.798814] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:48.278 [2024-07-26 10:36:00.923438] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:48.278 [2024-07-26 10:36:00.923796] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:48.278 [2024-07-26 10:36:01.149223] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:48.278 [2024-07-26 10:36:01.149451] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:48.843 [2024-07-26 10:36:01.484611] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:48.843 [2024-07-26 10:36:01.594431] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:48.843 [2024-07-26 10:36:01.594618] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 
-- # local process_type=rebuild 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.101 10:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:49.359 "name": "raid_bdev1", 00:25:49.359 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:49.359 "strip_size_kb": 0, 00:25:49.359 "state": "online", 00:25:49.359 "raid_level": "raid1", 00:25:49.359 "superblock": true, 00:25:49.359 "num_base_bdevs": 2, 00:25:49.359 "num_base_bdevs_discovered": 2, 00:25:49.359 "num_base_bdevs_operational": 2, 00:25:49.359 "process": { 00:25:49.359 "type": "rebuild", 00:25:49.359 "target": "spare", 00:25:49.359 "progress": { 00:25:49.359 "blocks": 14336, 00:25:49.359 "percent": 22 00:25:49.359 } 00:25:49.359 }, 00:25:49.359 "base_bdevs_list": [ 00:25:49.359 { 00:25:49.359 "name": "spare", 00:25:49.359 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:49.359 "is_configured": true, 00:25:49.359 "data_offset": 2048, 00:25:49.359 "data_size": 63488 00:25:49.359 }, 00:25:49.359 { 00:25:49.359 "name": "BaseBdev2", 00:25:49.359 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:49.359 "is_configured": true, 00:25:49.359 "data_offset": 2048, 00:25:49.359 "data_size": 63488 00:25:49.359 } 00:25:49.359 ] 00:25:49.359 }' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:49.359 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=806 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # 
local target=spare 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.359 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.617 [2024-07-26 10:36:02.304714] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:49.617 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:49.617 "name": "raid_bdev1", 00:25:49.617 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:49.617 "strip_size_kb": 0, 00:25:49.617 "state": "online", 00:25:49.617 "raid_level": "raid1", 00:25:49.617 "superblock": true, 00:25:49.617 "num_base_bdevs": 2, 00:25:49.617 "num_base_bdevs_discovered": 2, 00:25:49.617 "num_base_bdevs_operational": 2, 00:25:49.617 "process": { 00:25:49.617 "type": "rebuild", 00:25:49.617 "target": "spare", 00:25:49.617 "progress": { 00:25:49.617 "blocks": 20480, 00:25:49.617 "percent": 32 00:25:49.617 } 00:25:49.617 }, 00:25:49.617 "base_bdevs_list": [ 00:25:49.617 { 00:25:49.617 "name": "spare", 00:25:49.617 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:49.617 "is_configured": true, 00:25:49.617 "data_offset": 2048, 00:25:49.617 "data_size": 63488 00:25:49.617 }, 00:25:49.617 { 00:25:49.617 "name": "BaseBdev2", 00:25:49.617 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:49.617 "is_configured": true, 00:25:49.617 "data_offset": 2048, 00:25:49.617 "data_size": 63488 00:25:49.617 } 00:25:49.617 ] 00:25:49.617 }' 00:25:49.617 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:49.617 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.617 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:49.617 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.617 10:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:49.875 [2024-07-26 10:36:02.553415] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:50.133 [2024-07-26 10:36:02.889094] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:50.133 [2024-07-26 10:36:03.022833] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:50.391 [2024-07-26 10:36:03.242161] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:50.649 [2024-07-26 10:36:03.367471] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
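Once the spare is re-attached with bdev_raid_add_base_bdev, the target starts a rebuild and the harness repeatedly queries the raid bdev, pulling the process type, target, and progress out of the JSON shown above. A minimal sketch of that polling loop, assuming the same socket and bdev names as in the trace (the real harness checks each field through verify_raid_bdev_process rather than a bare loop):

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Re-attach the spare so the rebuild starts, then follow its progress
  $RPC bdev_raid_add_base_bdev raid_bdev1 spare
  while :; do
      info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [ "$(echo "$info" | jq -r '.process.type // "none"')" = rebuild ] || break
      echo "$info" | jq -r '"\(.process.progress.blocks) blocks (\(.process.progress.percent)%) rebuilt to \(.process.target)"'
      sleep 1
  done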
00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.649 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.907 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:50.907 "name": "raid_bdev1", 00:25:50.907 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:50.907 "strip_size_kb": 0, 00:25:50.907 "state": "online", 00:25:50.907 "raid_level": "raid1", 00:25:50.907 "superblock": true, 00:25:50.907 "num_base_bdevs": 2, 00:25:50.907 "num_base_bdevs_discovered": 2, 00:25:50.907 "num_base_bdevs_operational": 2, 00:25:50.907 "process": { 00:25:50.907 "type": "rebuild", 00:25:50.907 "target": "spare", 00:25:50.907 "progress": { 00:25:50.907 "blocks": 38912, 00:25:50.907 "percent": 61 00:25:50.907 } 00:25:50.907 }, 00:25:50.907 "base_bdevs_list": [ 00:25:50.907 { 00:25:50.907 "name": "spare", 00:25:50.907 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:50.907 "is_configured": true, 00:25:50.907 "data_offset": 2048, 00:25:50.907 "data_size": 63488 00:25:50.907 }, 00:25:50.907 { 00:25:50.907 "name": "BaseBdev2", 00:25:50.907 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:50.907 "is_configured": true, 00:25:50.907 "data_offset": 2048, 00:25:50.907 "data_size": 63488 00:25:50.907 } 00:25:50.907 ] 00:25:50.907 }' 00:25:50.907 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:50.907 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.907 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:50.907 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.907 10:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:51.843 [2024-07-26 10:36:04.392259] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:52.102 "name": "raid_bdev1", 00:25:52.102 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:52.102 "strip_size_kb": 0, 00:25:52.102 "state": "online", 00:25:52.102 "raid_level": "raid1", 00:25:52.102 "superblock": true, 00:25:52.102 "num_base_bdevs": 2, 00:25:52.102 "num_base_bdevs_discovered": 2, 00:25:52.102 "num_base_bdevs_operational": 2, 00:25:52.102 "process": { 00:25:52.102 "type": "rebuild", 00:25:52.102 "target": "spare", 00:25:52.102 "progress": { 00:25:52.102 "blocks": 61440, 00:25:52.102 "percent": 96 00:25:52.102 } 00:25:52.102 }, 00:25:52.102 "base_bdevs_list": [ 00:25:52.102 { 00:25:52.102 "name": "spare", 00:25:52.102 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:52.102 "is_configured": true, 00:25:52.102 "data_offset": 2048, 00:25:52.102 "data_size": 63488 00:25:52.102 }, 00:25:52.102 { 00:25:52.102 "name": "BaseBdev2", 00:25:52.102 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:52.102 "is_configured": true, 00:25:52.102 "data_offset": 2048, 00:25:52.102 "data_size": 63488 00:25:52.102 } 00:25:52.102 ] 00:25:52.102 }' 00:25:52.102 10:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.361 10:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.361 10:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.361 [2024-07-26 10:36:05.045703] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:52.361 10:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.361 10:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:52.361 [2024-07-26 10:36:05.153326] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:52.361 [2024-07-26 10:36:05.154899] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.298 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.557 "name": "raid_bdev1", 00:25:53.557 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:53.557 "strip_size_kb": 0, 00:25:53.557 "state": "online", 00:25:53.557 "raid_level": "raid1", 00:25:53.557 "superblock": true, 00:25:53.557 "num_base_bdevs": 2, 00:25:53.557 
"num_base_bdevs_discovered": 2, 00:25:53.557 "num_base_bdevs_operational": 2, 00:25:53.557 "base_bdevs_list": [ 00:25:53.557 { 00:25:53.557 "name": "spare", 00:25:53.557 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:53.557 "is_configured": true, 00:25:53.557 "data_offset": 2048, 00:25:53.557 "data_size": 63488 00:25:53.557 }, 00:25:53.557 { 00:25:53.557 "name": "BaseBdev2", 00:25:53.557 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:53.557 "is_configured": true, 00:25:53.557 "data_offset": 2048, 00:25:53.557 "data_size": 63488 00:25:53.557 } 00:25:53.557 ] 00:25:53.557 }' 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.557 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.816 "name": "raid_bdev1", 00:25:53.816 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:53.816 "strip_size_kb": 0, 00:25:53.816 "state": "online", 00:25:53.816 "raid_level": "raid1", 00:25:53.816 "superblock": true, 00:25:53.816 "num_base_bdevs": 2, 00:25:53.816 "num_base_bdevs_discovered": 2, 00:25:53.816 "num_base_bdevs_operational": 2, 00:25:53.816 "base_bdevs_list": [ 00:25:53.816 { 00:25:53.816 "name": "spare", 00:25:53.816 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:53.816 "is_configured": true, 00:25:53.816 "data_offset": 2048, 00:25:53.816 "data_size": 63488 00:25:53.816 }, 00:25:53.816 { 00:25:53.816 "name": "BaseBdev2", 00:25:53.816 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:53.816 "is_configured": true, 00:25:53.816 "data_offset": 2048, 00:25:53.816 "data_size": 63488 00:25:53.816 } 00:25:53.816 ] 00:25:53.816 }' 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.816 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.077 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.077 "name": "raid_bdev1", 00:25:54.077 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:54.077 "strip_size_kb": 0, 00:25:54.077 "state": "online", 00:25:54.077 "raid_level": "raid1", 00:25:54.077 "superblock": true, 00:25:54.077 "num_base_bdevs": 2, 00:25:54.077 "num_base_bdevs_discovered": 2, 00:25:54.077 "num_base_bdevs_operational": 2, 00:25:54.077 "base_bdevs_list": [ 00:25:54.077 { 00:25:54.077 "name": "spare", 00:25:54.077 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:54.077 "is_configured": true, 00:25:54.077 "data_offset": 2048, 00:25:54.077 "data_size": 63488 00:25:54.077 }, 00:25:54.077 { 00:25:54.077 "name": "BaseBdev2", 00:25:54.077 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:54.077 "is_configured": true, 00:25:54.077 "data_offset": 2048, 00:25:54.077 "data_size": 63488 00:25:54.077 } 00:25:54.077 ] 00:25:54.077 }' 00:25:54.077 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.077 10:36:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:54.645 10:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:54.903 [2024-07-26 10:36:07.712570] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:54.903 [2024-07-26 10:36:07.712601] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.903 00:25:54.903 Latency(us) 00:25:54.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.904 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:54.904 raid_bdev1 : 11.31 112.11 336.32 0.00 0.00 12127.56 262.14 116601.65 00:25:54.904 =================================================================================================================== 00:25:54.904 Total : 112.11 336.32 0.00 0.00 12127.56 262.14 116601.65 
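At this point the trace has verified the rebuilt array with verify_raid_bdev_state raid_bdev1 online raid1 0 2, deleted it, and printed the I/O latency summary. A hedged sketch of that state check is below; it reuses the same raid_bdev1 name and socket assumptions as the previous sketch, and the expected values (online, raid1, two of two base bdevs) are taken from the JSON printed in the trace.

# Sketch only: assert the post-rebuild state the trace expects.
# Assumes raid_bdev1, /var/tmp/spdk-raid.sock, rpc.py on PATH.
info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "raid_bdev1")')

[[ $(jq -r '.state'      <<< "$info") == online ]] || exit 1
[[ $(jq -r '.raid_level' <<< "$info") == raid1  ]] || exit 1
(( $(jq -r '.num_base_bdevs_discovered'  <<< "$info") == 2 )) || exit 1
(( $(jq -r '.num_base_bdevs_operational' <<< "$info") == 2 )) || exit 1
echo "raid_bdev1 is online with all base bdevs configured"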
00:25:54.904 [2024-07-26 10:36:07.780378] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.904 [2024-07-26 10:36:07.780404] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.904 [2024-07-26 10:36:07.780470] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.904 [2024-07-26 10:36:07.780482] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2365250 name raid_bdev1, state offline 00:25:54.904 0 00:25:54.904 10:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.904 10:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:55.162 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:55.420 /dev/nbd0 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:55.420 1+0 records in 00:25:55.420 1+0 records out 00:25:55.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260955 s, 15.7 MB/s 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:55.420 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:25:55.680 /dev/nbd1 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:55.680 1+0 records in 00:25:55.680 1+0 records out 00:25:55.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268962 s, 15.2 MB/s 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:55.680 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.982 
10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.982 10:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:56.240 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:56.498 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:56.756 [2024-07-26 10:36:09.561425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:56.756 [2024-07-26 10:36:09.561465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.756 [2024-07-26 10:36:09.561484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23642a0 00:25:56.756 [2024-07-26 10:36:09.561495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.756 [2024-07-26 10:36:09.562957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.756 [2024-07-26 10:36:09.562983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:56.756 [2024-07-26 10:36:09.563047] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:56.756 [2024-07-26 10:36:09.563070] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:56.756 [2024-07-26 10:36:09.563171] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:56.756 spare 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.756 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.757 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.757 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.015 [2024-07-26 10:36:09.663476] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23b9710 00:25:57.015 [2024-07-26 10:36:09.663495] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:57.015 [2024-07-26 10:36:09.663652] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21f4ea0 00:25:57.015 [2024-07-26 10:36:09.663789] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23b9710 00:25:57.015 [2024-07-26 10:36:09.663798] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23b9710 00:25:57.015 [2024-07-26 10:36:09.663894] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.015 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.015 "name": "raid_bdev1", 00:25:57.015 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:57.015 "strip_size_kb": 0, 00:25:57.015 "state": "online", 00:25:57.015 "raid_level": "raid1", 00:25:57.015 "superblock": true, 00:25:57.015 "num_base_bdevs": 2, 00:25:57.015 "num_base_bdevs_discovered": 2, 00:25:57.015 "num_base_bdevs_operational": 2, 00:25:57.015 "base_bdevs_list": [ 00:25:57.015 { 00:25:57.015 "name": "spare", 00:25:57.015 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:57.015 "is_configured": true, 00:25:57.015 "data_offset": 2048, 00:25:57.015 "data_size": 63488 00:25:57.015 }, 00:25:57.015 { 00:25:57.015 "name": "BaseBdev2", 00:25:57.015 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:57.015 "is_configured": true, 00:25:57.015 "data_offset": 2048, 00:25:57.015 "data_size": 63488 00:25:57.015 } 00:25:57.015 ] 00:25:57.015 }' 00:25:57.015 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.015 10:36:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:57.581 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:57.581 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:57.581 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:57.581 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:57.582 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:57.582 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.582 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.841 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:57.841 "name": "raid_bdev1", 00:25:57.841 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:57.841 "strip_size_kb": 0, 00:25:57.841 "state": "online", 00:25:57.841 "raid_level": "raid1", 00:25:57.841 "superblock": true, 00:25:57.841 "num_base_bdevs": 2, 00:25:57.841 "num_base_bdevs_discovered": 2, 00:25:57.841 "num_base_bdevs_operational": 2, 00:25:57.841 "base_bdevs_list": [ 00:25:57.841 { 00:25:57.841 "name": "spare", 00:25:57.841 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:25:57.841 "is_configured": true, 00:25:57.841 "data_offset": 2048, 00:25:57.841 "data_size": 63488 00:25:57.841 }, 00:25:57.841 { 00:25:57.841 "name": "BaseBdev2", 00:25:57.841 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:57.841 "is_configured": true, 00:25:57.841 "data_offset": 2048, 00:25:57.842 "data_size": 63488 00:25:57.842 } 00:25:57.842 ] 00:25:57.842 }' 00:25:57.842 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:57.842 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:57.842 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:57.842 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:57.842 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.842 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:58.100 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.100 10:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:58.359 [2024-07-26 10:36:11.153900] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.359 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.617 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.617 "name": "raid_bdev1", 00:25:58.617 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:25:58.617 "strip_size_kb": 0, 00:25:58.617 "state": "online", 00:25:58.617 "raid_level": "raid1", 00:25:58.617 "superblock": true, 00:25:58.617 "num_base_bdevs": 2, 00:25:58.617 "num_base_bdevs_discovered": 1, 00:25:58.617 "num_base_bdevs_operational": 1, 00:25:58.617 "base_bdevs_list": [ 00:25:58.617 { 00:25:58.618 "name": null, 00:25:58.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.618 "is_configured": false, 00:25:58.618 "data_offset": 2048, 00:25:58.618 "data_size": 63488 00:25:58.618 }, 00:25:58.618 { 00:25:58.618 "name": "BaseBdev2", 00:25:58.618 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:25:58.618 "is_configured": true, 00:25:58.618 "data_offset": 2048, 00:25:58.618 "data_size": 63488 00:25:58.618 } 00:25:58.618 ] 00:25:58.618 }' 00:25:58.618 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.618 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:59.186 10:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:59.445 [2024-07-26 10:36:12.124594] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:59.445 [2024-07-26 10:36:12.124721] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:59.445 [2024-07-26 10:36:12.124736] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
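The trace above removes the "spare" member with bdev_raid_remove_base_bdev, confirms the array keeps running degraded (one of two base bdevs discovered, the vacated slot reported as a null entry), and then re-adds the bdev with bdev_raid_add_base_bdev, which the examine path matches by its superblock and re-attaches so a rebuild can start. A minimal sketch of that degrade-and-repair cycle follows, under the same name and socket assumptions as the earlier sketches.

# Sketch only: degrade the array and re-add the member, as in the trace.
# Assumes raid_bdev1/spare names, /var/tmp/spdk-raid.sock, rpc.py on PATH.
rpc=(rpc.py -s /var/tmp/spdk-raid.sock)

"${rpc[@]}" bdev_raid_remove_base_bdev spare

degraded=$("${rpc[@]}" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
echo "base bdevs discovered: $(jq -r '.num_base_bdevs_discovered' <<< "$degraded") of $(jq -r '.num_base_bdevs' <<< "$degraded")"

# Re-adding the same bdev lets the superblock examine path match it to the
# array and kick off a rebuild onto it.
"${rpc[@]}" bdev_raid_add_base_bdev raid_bdev1 spare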
00:25:59.445 [2024-07-26 10:36:12.124764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:59.445 [2024-07-26 10:36:12.129686] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2363330 00:25:59.445 [2024-07-26 10:36:12.131609] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:59.445 10:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.381 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.644 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:00.644 "name": "raid_bdev1", 00:26:00.644 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:00.644 "strip_size_kb": 0, 00:26:00.644 "state": "online", 00:26:00.644 "raid_level": "raid1", 00:26:00.644 "superblock": true, 00:26:00.644 "num_base_bdevs": 2, 00:26:00.644 "num_base_bdevs_discovered": 2, 00:26:00.644 "num_base_bdevs_operational": 2, 00:26:00.644 "process": { 00:26:00.644 "type": "rebuild", 00:26:00.644 "target": "spare", 00:26:00.644 "progress": { 00:26:00.644 "blocks": 24576, 00:26:00.644 "percent": 38 00:26:00.644 } 00:26:00.644 }, 00:26:00.644 "base_bdevs_list": [ 00:26:00.644 { 00:26:00.644 "name": "spare", 00:26:00.644 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:26:00.644 "is_configured": true, 00:26:00.644 "data_offset": 2048, 00:26:00.644 "data_size": 63488 00:26:00.644 }, 00:26:00.644 { 00:26:00.644 "name": "BaseBdev2", 00:26:00.644 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:00.644 "is_configured": true, 00:26:00.644 "data_offset": 2048, 00:26:00.644 "data_size": 63488 00:26:00.644 } 00:26:00.644 ] 00:26:00.644 }' 00:26:00.644 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:00.644 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:00.644 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:00.644 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.644 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:00.903 [2024-07-26 10:36:13.678826] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:00.903 [2024-07-26 10:36:13.743379] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:00.903 [2024-07-26 10:36:13.743422] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.903 [2024-07-26 10:36:13.743436] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:00.903 [2024-07-26 10:36:13.743443] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.903 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.161 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.162 "name": "raid_bdev1", 00:26:01.162 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:01.162 "strip_size_kb": 0, 00:26:01.162 "state": "online", 00:26:01.162 "raid_level": "raid1", 00:26:01.162 "superblock": true, 00:26:01.162 "num_base_bdevs": 2, 00:26:01.162 "num_base_bdevs_discovered": 1, 00:26:01.162 "num_base_bdevs_operational": 1, 00:26:01.162 "base_bdevs_list": [ 00:26:01.162 { 00:26:01.162 "name": null, 00:26:01.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.162 "is_configured": false, 00:26:01.162 "data_offset": 2048, 00:26:01.162 "data_size": 63488 00:26:01.162 }, 00:26:01.162 { 00:26:01.162 "name": "BaseBdev2", 00:26:01.162 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:01.162 "is_configured": true, 00:26:01.162 "data_offset": 2048, 00:26:01.162 "data_size": 63488 00:26:01.162 } 00:26:01.162 ] 00:26:01.162 }' 00:26:01.162 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.162 10:36:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:01.729 10:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:01.988 [2024-07-26 10:36:14.766510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:01.988 [2024-07-26 10:36:14.766557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.988 [2024-07-26 10:36:14.766577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23b6d10 00:26:01.988 
[2024-07-26 10:36:14.766589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.988 [2024-07-26 10:36:14.766919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.988 [2024-07-26 10:36:14.766934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:01.988 [2024-07-26 10:36:14.767006] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:01.988 [2024-07-26 10:36:14.767016] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:01.988 [2024-07-26 10:36:14.767026] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:01.988 [2024-07-26 10:36:14.767043] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:01.988 [2024-07-26 10:36:14.772300] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21f4ea0 00:26:01.988 spare 00:26:01.988 [2024-07-26 10:36:14.773560] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:01.988 10:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.925 10:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.184 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.184 "name": "raid_bdev1", 00:26:03.184 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:03.184 "strip_size_kb": 0, 00:26:03.184 "state": "online", 00:26:03.184 "raid_level": "raid1", 00:26:03.184 "superblock": true, 00:26:03.184 "num_base_bdevs": 2, 00:26:03.184 "num_base_bdevs_discovered": 2, 00:26:03.184 "num_base_bdevs_operational": 2, 00:26:03.184 "process": { 00:26:03.184 "type": "rebuild", 00:26:03.184 "target": "spare", 00:26:03.184 "progress": { 00:26:03.184 "blocks": 24576, 00:26:03.184 "percent": 38 00:26:03.184 } 00:26:03.184 }, 00:26:03.184 "base_bdevs_list": [ 00:26:03.184 { 00:26:03.184 "name": "spare", 00:26:03.184 "uuid": "7a699a80-5d22-5d34-a48c-30d2af80a63d", 00:26:03.184 "is_configured": true, 00:26:03.184 "data_offset": 2048, 00:26:03.184 "data_size": 63488 00:26:03.184 }, 00:26:03.184 { 00:26:03.184 "name": "BaseBdev2", 00:26:03.184 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:03.184 "is_configured": true, 00:26:03.184 "data_offset": 2048, 00:26:03.184 "data_size": 63488 00:26:03.184 } 00:26:03.184 ] 00:26:03.184 }' 00:26:03.184 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:03.184 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.184 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:03.442 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.443 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:03.443 [2024-07-26 10:36:16.321109] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:03.704 [2024-07-26 10:36:16.385248] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:03.704 [2024-07-26 10:36:16.385294] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.704 [2024-07-26 10:36:16.385308] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:03.704 [2024-07-26 10:36:16.385316] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.704 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.964 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.964 "name": "raid_bdev1", 00:26:03.964 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:03.964 "strip_size_kb": 0, 00:26:03.964 "state": "online", 00:26:03.964 "raid_level": "raid1", 00:26:03.964 "superblock": true, 00:26:03.964 "num_base_bdevs": 2, 00:26:03.964 "num_base_bdevs_discovered": 1, 00:26:03.964 "num_base_bdevs_operational": 1, 00:26:03.964 "base_bdevs_list": [ 00:26:03.964 { 00:26:03.964 "name": null, 00:26:03.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.964 "is_configured": false, 00:26:03.964 "data_offset": 2048, 00:26:03.964 "data_size": 63488 00:26:03.964 }, 00:26:03.964 { 00:26:03.964 "name": "BaseBdev2", 00:26:03.964 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:03.964 "is_configured": true, 00:26:03.964 "data_offset": 2048, 00:26:03.964 "data_size": 63488 00:26:03.964 } 00:26:03.964 ] 00:26:03.964 
}' 00:26:03.964 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.964 10:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:04.531 "name": "raid_bdev1", 00:26:04.531 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:04.531 "strip_size_kb": 0, 00:26:04.531 "state": "online", 00:26:04.531 "raid_level": "raid1", 00:26:04.531 "superblock": true, 00:26:04.531 "num_base_bdevs": 2, 00:26:04.531 "num_base_bdevs_discovered": 1, 00:26:04.531 "num_base_bdevs_operational": 1, 00:26:04.531 "base_bdevs_list": [ 00:26:04.531 { 00:26:04.531 "name": null, 00:26:04.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.531 "is_configured": false, 00:26:04.531 "data_offset": 2048, 00:26:04.531 "data_size": 63488 00:26:04.531 }, 00:26:04.531 { 00:26:04.531 "name": "BaseBdev2", 00:26:04.531 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:04.531 "is_configured": true, 00:26:04.531 "data_offset": 2048, 00:26:04.531 "data_size": 63488 00:26:04.531 } 00:26:04.531 ] 00:26:04.531 }' 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:04.531 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:04.790 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:04.790 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:05.048 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:05.048 [2024-07-26 10:36:17.905929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:05.048 [2024-07-26 10:36:17.905975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.048 [2024-07-26 10:36:17.905993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23ba6b0 00:26:05.048 [2024-07-26 10:36:17.906005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.048 [2024-07-26 10:36:17.906320] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:26:05.048 [2024-07-26 10:36:17.906336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:05.048 [2024-07-26 10:36:17.906392] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:05.048 [2024-07-26 10:36:17.906403] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:05.048 [2024-07-26 10:36:17.906412] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:05.048 BaseBdev1 00:26:05.048 10:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.429 10:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.429 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.429 "name": "raid_bdev1", 00:26:06.429 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:06.429 "strip_size_kb": 0, 00:26:06.429 "state": "online", 00:26:06.429 "raid_level": "raid1", 00:26:06.429 "superblock": true, 00:26:06.429 "num_base_bdevs": 2, 00:26:06.429 "num_base_bdevs_discovered": 1, 00:26:06.429 "num_base_bdevs_operational": 1, 00:26:06.429 "base_bdevs_list": [ 00:26:06.429 { 00:26:06.429 "name": null, 00:26:06.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.429 "is_configured": false, 00:26:06.429 "data_offset": 2048, 00:26:06.429 "data_size": 63488 00:26:06.429 }, 00:26:06.429 { 00:26:06.429 "name": "BaseBdev2", 00:26:06.429 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:06.429 "is_configured": true, 00:26:06.429 "data_offset": 2048, 00:26:06.429 "data_size": 63488 00:26:06.429 } 00:26:06.429 ] 00:26:06.429 }' 00:26:06.429 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.429 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:06.996 10:36:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:06.996 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:06.996 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:06.996 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:06.996 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.996 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.253 10:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:07.253 "name": "raid_bdev1", 00:26:07.253 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:07.253 "strip_size_kb": 0, 00:26:07.253 "state": "online", 00:26:07.253 "raid_level": "raid1", 00:26:07.253 "superblock": true, 00:26:07.253 "num_base_bdevs": 2, 00:26:07.253 "num_base_bdevs_discovered": 1, 00:26:07.253 "num_base_bdevs_operational": 1, 00:26:07.253 "base_bdevs_list": [ 00:26:07.253 { 00:26:07.253 "name": null, 00:26:07.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.253 "is_configured": false, 00:26:07.253 "data_offset": 2048, 00:26:07.253 "data_size": 63488 00:26:07.253 }, 00:26:07.253 { 00:26:07.253 "name": "BaseBdev2", 00:26:07.253 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:07.253 "is_configured": true, 00:26:07.253 "data_offset": 2048, 00:26:07.253 "data_size": 63488 00:26:07.253 } 00:26:07.253 ] 00:26:07.253 }' 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:07.253 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:07.511 [2024-07-26 10:36:20.312728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.511 [2024-07-26 10:36:20.312837] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:07.511 [2024-07-26 10:36:20.312852] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:07.511 request: 00:26:07.511 { 00:26:07.511 "base_bdev": "BaseBdev1", 00:26:07.512 "raid_bdev": "raid_bdev1", 00:26:07.512 "method": "bdev_raid_add_base_bdev", 00:26:07.512 "req_id": 1 00:26:07.512 } 00:26:07.512 Got JSON-RPC error response 00:26:07.512 response: 00:26:07.512 { 00:26:07.512 "code": -22, 00:26:07.512 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:07.512 } 00:26:07.512 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:26:07.512 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:07.512 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:07.512 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:07.512 10:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.472 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.749 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:26:08.749 "name": "raid_bdev1", 00:26:08.749 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:08.749 "strip_size_kb": 0, 00:26:08.749 "state": "online", 00:26:08.749 "raid_level": "raid1", 00:26:08.749 "superblock": true, 00:26:08.749 "num_base_bdevs": 2, 00:26:08.749 "num_base_bdevs_discovered": 1, 00:26:08.749 "num_base_bdevs_operational": 1, 00:26:08.749 "base_bdevs_list": [ 00:26:08.749 { 00:26:08.749 "name": null, 00:26:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.749 "is_configured": false, 00:26:08.749 "data_offset": 2048, 00:26:08.749 "data_size": 63488 00:26:08.749 }, 00:26:08.749 { 00:26:08.749 "name": "BaseBdev2", 00:26:08.749 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:08.749 "is_configured": true, 00:26:08.749 "data_offset": 2048, 00:26:08.749 "data_size": 63488 00:26:08.749 } 00:26:08.749 ] 00:26:08.749 }' 00:26:08.749 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.749 10:36:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.316 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:09.576 "name": "raid_bdev1", 00:26:09.576 "uuid": "9febe928-312e-40fb-9316-c6938388cbc5", 00:26:09.576 "strip_size_kb": 0, 00:26:09.576 "state": "online", 00:26:09.576 "raid_level": "raid1", 00:26:09.576 "superblock": true, 00:26:09.576 "num_base_bdevs": 2, 00:26:09.576 "num_base_bdevs_discovered": 1, 00:26:09.576 "num_base_bdevs_operational": 1, 00:26:09.576 "base_bdevs_list": [ 00:26:09.576 { 00:26:09.576 "name": null, 00:26:09.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.576 "is_configured": false, 00:26:09.576 "data_offset": 2048, 00:26:09.576 "data_size": 63488 00:26:09.576 }, 00:26:09.576 { 00:26:09.576 "name": "BaseBdev2", 00:26:09.576 "uuid": "224d6450-4fe6-5c3d-af16-36a10654fab1", 00:26:09.576 "is_configured": true, 00:26:09.576 "data_offset": 2048, 00:26:09.576 "data_size": 63488 00:26:09.576 } 00:26:09.576 ] 00:26:09.576 }' 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 3485587 00:26:09.576 10:36:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 3485587 ']' 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 3485587 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.576 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3485587 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3485587' 00:26:09.835 killing process with pid 3485587 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 3485587 00:26:09.835 Received shutdown signal, test time was about 26.003817 seconds 00:26:09.835 00:26:09.835 Latency(us) 00:26:09.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.835 =================================================================================================================== 00:26:09.835 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.835 [2024-07-26 10:36:22.505503] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.835 [2024-07-26 10:36:22.505585] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.835 [2024-07-26 10:36:22.505626] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.835 [2024-07-26 10:36:22.505636] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b9710 name raid_bdev1, state offline 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 3485587 00:26:09.835 [2024-07-26 10:36:22.524072] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:26:09.835 00:26:09.835 real 0m30.427s 00:26:09.835 user 0m47.113s 00:26:09.835 sys 0m4.481s 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.835 10:36:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:09.835 ************************************ 00:26:09.835 END TEST raid_rebuild_test_sb_io 00:26:09.835 ************************************ 00:26:10.095 10:36:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:26:10.095 10:36:22 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:26:10.095 10:36:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:10.095 10:36:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.095 10:36:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.095 ************************************ 00:26:10.095 START TEST raid_rebuild_test 00:26:10.095 ************************************ 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local 
raid_level=raid1 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=3491743 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 3491743 /var/tmp/spdk-raid.sock 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 3491743 ']' 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:10.095 10:36:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:10.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.095 10:36:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.095 [2024-07-26 10:36:22.858772] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:26:10.095 [2024-07-26 10:36:22.858827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491743 ] 00:26:10.095 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:10.095 Zero copy mechanism will not be used. 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:10.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.095 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:10.096 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:10.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:10.096 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:10.096 [2024-07-26 10:36:22.991846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.355 [2024-07-26 10:36:23.036322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.355 [2024-07-26 10:36:23.102926] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.355 [2024-07-26 10:36:23.102963] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.928 10:36:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.928 10:36:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:26:10.928 10:36:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:10.928 10:36:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:11.186 BaseBdev1_malloc 00:26:11.186 10:36:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:11.443 [2024-07-26 10:36:24.197897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:11.443 [2024-07-26 10:36:24.197941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:26:11.443 [2024-07-26 10:36:24.197963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x241b370 00:26:11.444 [2024-07-26 10:36:24.197975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.444 [2024-07-26 10:36:24.199398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.444 [2024-07-26 10:36:24.199424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:11.444 BaseBdev1 00:26:11.444 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:11.444 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:11.701 BaseBdev2_malloc 00:26:11.701 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:11.959 [2024-07-26 10:36:24.659483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:11.959 [2024-07-26 10:36:24.659523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.959 [2024-07-26 10:36:24.659542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d70d0 00:26:11.959 [2024-07-26 10:36:24.659553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.959 [2024-07-26 10:36:24.660976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.959 [2024-07-26 10:36:24.661002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:11.959 BaseBdev2 00:26:11.959 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:11.959 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:12.216 BaseBdev3_malloc 00:26:12.216 10:36:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:12.474 [2024-07-26 10:36:25.120992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:12.474 [2024-07-26 10:36:25.121032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.474 [2024-07-26 10:36:25.121051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23c29f0 00:26:12.474 [2024-07-26 10:36:25.121062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.474 [2024-07-26 10:36:25.122390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.474 [2024-07-26 10:36:25.122416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:12.474 BaseBdev3 00:26:12.474 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:12.474 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:12.474 
BaseBdev4_malloc 00:26:12.474 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:12.732 [2024-07-26 10:36:25.578362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:12.732 [2024-07-26 10:36:25.578405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.732 [2024-07-26 10:36:25.578427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23c4270 00:26:12.732 [2024-07-26 10:36:25.578439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.732 [2024-07-26 10:36:25.579755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.732 [2024-07-26 10:36:25.579781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:12.732 BaseBdev4 00:26:12.732 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:12.990 spare_malloc 00:26:12.990 10:36:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:13.249 spare_delay 00:26:13.249 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:13.506 [2024-07-26 10:36:26.248396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:13.506 [2024-07-26 10:36:26.248435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.506 [2024-07-26 10:36:26.248458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23c6700 00:26:13.506 [2024-07-26 10:36:26.248470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.506 [2024-07-26 10:36:26.249825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.506 [2024-07-26 10:36:26.249851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:13.506 spare 00:26:13.506 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:13.764 [2024-07-26 10:36:26.473008] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.764 [2024-07-26 10:36:26.474157] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.764 [2024-07-26 10:36:26.474209] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:13.764 [2024-07-26 10:36:26.474250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:13.764 [2024-07-26 10:36:26.474319] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23c8840 00:26:13.764 [2024-07-26 10:36:26.474329] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:13.764 [2024-07-26 10:36:26.474524] bdev_raid.c: 263:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x23cc180 00:26:13.764 [2024-07-26 10:36:26.474650] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23c8840 00:26:13.765 [2024-07-26 10:36:26.474659] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23c8840 00:26:13.765 [2024-07-26 10:36:26.474761] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.765 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.022 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.023 "name": "raid_bdev1", 00:26:14.023 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:14.023 "strip_size_kb": 0, 00:26:14.023 "state": "online", 00:26:14.023 "raid_level": "raid1", 00:26:14.023 "superblock": false, 00:26:14.023 "num_base_bdevs": 4, 00:26:14.023 "num_base_bdevs_discovered": 4, 00:26:14.023 "num_base_bdevs_operational": 4, 00:26:14.023 "base_bdevs_list": [ 00:26:14.023 { 00:26:14.023 "name": "BaseBdev1", 00:26:14.023 "uuid": "b9232e82-d846-525a-9ddf-a26479bc836d", 00:26:14.023 "is_configured": true, 00:26:14.023 "data_offset": 0, 00:26:14.023 "data_size": 65536 00:26:14.023 }, 00:26:14.023 { 00:26:14.023 "name": "BaseBdev2", 00:26:14.023 "uuid": "28b64221-5c2f-5fe5-a3c4-4b76792caf26", 00:26:14.023 "is_configured": true, 00:26:14.023 "data_offset": 0, 00:26:14.023 "data_size": 65536 00:26:14.023 }, 00:26:14.023 { 00:26:14.023 "name": "BaseBdev3", 00:26:14.023 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:14.023 "is_configured": true, 00:26:14.023 "data_offset": 0, 00:26:14.023 "data_size": 65536 00:26:14.023 }, 00:26:14.023 { 00:26:14.023 "name": "BaseBdev4", 00:26:14.023 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:14.023 "is_configured": true, 00:26:14.023 "data_offset": 0, 00:26:14.023 "data_size": 65536 00:26:14.023 } 00:26:14.023 ] 00:26:14.023 }' 00:26:14.023 10:36:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.023 10:36:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.588 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:14.588 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:14.588 [2024-07-26 10:36:27.483921] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:14.846 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:15.104 [2024-07-26 10:36:27.940899] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x226abd0 00:26:15.104 /dev/nbd0 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:15.104 1+0 records in 00:26:15.104 1+0 records out 00:26:15.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245798 s, 16.7 MB/s 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:15.104 10:36:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:15.104 10:36:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:21.660 65536+0 records in 00:26:21.660 65536+0 records out 00:26:21.660 33554432 bytes (34 MB, 32 MiB) copied, 5.84508 s, 5.7 MB/s 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:21.660 10:36:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:21.660 [2024-07-26 10:36:34.100109] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:21.660 [2024-07-26 10:36:34.320700] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.660 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.918 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.918 "name": "raid_bdev1", 00:26:21.918 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:21.918 "strip_size_kb": 0, 00:26:21.918 "state": "online", 00:26:21.918 "raid_level": "raid1", 00:26:21.918 "superblock": false, 00:26:21.918 "num_base_bdevs": 4, 00:26:21.918 "num_base_bdevs_discovered": 3, 00:26:21.918 "num_base_bdevs_operational": 3, 00:26:21.918 "base_bdevs_list": [ 00:26:21.918 { 00:26:21.918 "name": null, 00:26:21.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.918 "is_configured": false, 00:26:21.918 "data_offset": 0, 00:26:21.918 "data_size": 65536 00:26:21.918 }, 00:26:21.918 { 00:26:21.918 "name": "BaseBdev2", 00:26:21.918 "uuid": "28b64221-5c2f-5fe5-a3c4-4b76792caf26", 00:26:21.918 "is_configured": true, 00:26:21.918 "data_offset": 0, 00:26:21.918 "data_size": 65536 00:26:21.918 }, 00:26:21.918 { 00:26:21.918 "name": "BaseBdev3", 00:26:21.918 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:21.918 "is_configured": true, 00:26:21.918 "data_offset": 0, 00:26:21.918 "data_size": 65536 00:26:21.918 }, 00:26:21.918 { 00:26:21.918 "name": "BaseBdev4", 00:26:21.918 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:21.918 "is_configured": true, 00:26:21.918 "data_offset": 0, 00:26:21.918 "data_size": 65536 00:26:21.918 } 00:26:21.918 ] 00:26:21.918 }' 00:26:21.918 10:36:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.918 10:36:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.483 10:36:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:22.483 [2024-07-26 10:36:35.347461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:22.483 [2024-07-26 10:36:35.351299] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23cc110 00:26:22.483 [2024-07-26 10:36:35.353319] bdev_raid.c:2921:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:22.483 10:36:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:23.858 "name": "raid_bdev1", 00:26:23.858 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:23.858 "strip_size_kb": 0, 00:26:23.858 "state": "online", 00:26:23.858 "raid_level": "raid1", 00:26:23.858 "superblock": false, 00:26:23.858 "num_base_bdevs": 4, 00:26:23.858 "num_base_bdevs_discovered": 4, 00:26:23.858 "num_base_bdevs_operational": 4, 00:26:23.858 "process": { 00:26:23.858 "type": "rebuild", 00:26:23.858 "target": "spare", 00:26:23.858 "progress": { 00:26:23.858 "blocks": 24576, 00:26:23.858 "percent": 37 00:26:23.858 } 00:26:23.858 }, 00:26:23.858 "base_bdevs_list": [ 00:26:23.858 { 00:26:23.858 "name": "spare", 00:26:23.858 "uuid": "84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:23.858 "is_configured": true, 00:26:23.858 "data_offset": 0, 00:26:23.858 "data_size": 65536 00:26:23.858 }, 00:26:23.858 { 00:26:23.858 "name": "BaseBdev2", 00:26:23.858 "uuid": "28b64221-5c2f-5fe5-a3c4-4b76792caf26", 00:26:23.858 "is_configured": true, 00:26:23.858 "data_offset": 0, 00:26:23.858 "data_size": 65536 00:26:23.858 }, 00:26:23.858 { 00:26:23.858 "name": "BaseBdev3", 00:26:23.858 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:23.858 "is_configured": true, 00:26:23.858 "data_offset": 0, 00:26:23.858 "data_size": 65536 00:26:23.858 }, 00:26:23.858 { 00:26:23.858 "name": "BaseBdev4", 00:26:23.858 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:23.858 "is_configured": true, 00:26:23.858 "data_offset": 0, 00:26:23.858 "data_size": 65536 00:26:23.858 } 00:26:23.858 ] 00:26:23.858 }' 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:23.858 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:24.116 [2024-07-26 10:36:36.906370] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:24.116 [2024-07-26 10:36:36.964950] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:24.116 
[2024-07-26 10:36:36.964998] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.116 [2024-07-26 10:36:36.965014] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:24.116 [2024-07-26 10:36:36.965022] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.116 10:36:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.375 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.375 "name": "raid_bdev1", 00:26:24.375 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:24.375 "strip_size_kb": 0, 00:26:24.375 "state": "online", 00:26:24.375 "raid_level": "raid1", 00:26:24.375 "superblock": false, 00:26:24.375 "num_base_bdevs": 4, 00:26:24.375 "num_base_bdevs_discovered": 3, 00:26:24.375 "num_base_bdevs_operational": 3, 00:26:24.375 "base_bdevs_list": [ 00:26:24.375 { 00:26:24.375 "name": null, 00:26:24.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.375 "is_configured": false, 00:26:24.375 "data_offset": 0, 00:26:24.375 "data_size": 65536 00:26:24.375 }, 00:26:24.375 { 00:26:24.375 "name": "BaseBdev2", 00:26:24.375 "uuid": "28b64221-5c2f-5fe5-a3c4-4b76792caf26", 00:26:24.375 "is_configured": true, 00:26:24.375 "data_offset": 0, 00:26:24.375 "data_size": 65536 00:26:24.375 }, 00:26:24.375 { 00:26:24.375 "name": "BaseBdev3", 00:26:24.375 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:24.375 "is_configured": true, 00:26:24.375 "data_offset": 0, 00:26:24.375 "data_size": 65536 00:26:24.375 }, 00:26:24.375 { 00:26:24.375 "name": "BaseBdev4", 00:26:24.375 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:24.375 "is_configured": true, 00:26:24.375 "data_offset": 0, 00:26:24.375 "data_size": 65536 00:26:24.375 } 00:26:24.375 ] 00:26:24.375 }' 00:26:24.375 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.375 10:36:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.952 10:36:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.210 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:25.210 "name": "raid_bdev1", 00:26:25.210 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:25.210 "strip_size_kb": 0, 00:26:25.210 "state": "online", 00:26:25.210 "raid_level": "raid1", 00:26:25.210 "superblock": false, 00:26:25.210 "num_base_bdevs": 4, 00:26:25.210 "num_base_bdevs_discovered": 3, 00:26:25.210 "num_base_bdevs_operational": 3, 00:26:25.210 "base_bdevs_list": [ 00:26:25.210 { 00:26:25.210 "name": null, 00:26:25.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.210 "is_configured": false, 00:26:25.210 "data_offset": 0, 00:26:25.210 "data_size": 65536 00:26:25.210 }, 00:26:25.210 { 00:26:25.210 "name": "BaseBdev2", 00:26:25.210 "uuid": "28b64221-5c2f-5fe5-a3c4-4b76792caf26", 00:26:25.210 "is_configured": true, 00:26:25.210 "data_offset": 0, 00:26:25.210 "data_size": 65536 00:26:25.210 }, 00:26:25.210 { 00:26:25.210 "name": "BaseBdev3", 00:26:25.210 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:25.210 "is_configured": true, 00:26:25.210 "data_offset": 0, 00:26:25.210 "data_size": 65536 00:26:25.210 }, 00:26:25.210 { 00:26:25.210 "name": "BaseBdev4", 00:26:25.210 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:25.210 "is_configured": true, 00:26:25.210 "data_offset": 0, 00:26:25.210 "data_size": 65536 00:26:25.210 } 00:26:25.210 ] 00:26:25.211 }' 00:26:25.211 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:25.211 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:25.211 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:25.469 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:25.469 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:25.469 [2024-07-26 10:36:38.331825] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:25.469 [2024-07-26 10:36:38.335586] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23cd1b0 00:26:25.469 [2024-07-26 10:36:38.336953] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:25.469 10:36:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:26.841 10:36:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:26.841 "name": "raid_bdev1", 00:26:26.841 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:26.841 "strip_size_kb": 0, 00:26:26.841 "state": "online", 00:26:26.841 "raid_level": "raid1", 00:26:26.841 "superblock": false, 00:26:26.841 "num_base_bdevs": 4, 00:26:26.841 "num_base_bdevs_discovered": 4, 00:26:26.841 "num_base_bdevs_operational": 4, 00:26:26.841 "process": { 00:26:26.841 "type": "rebuild", 00:26:26.841 "target": "spare", 00:26:26.841 "progress": { 00:26:26.841 "blocks": 24576, 00:26:26.841 "percent": 37 00:26:26.841 } 00:26:26.841 }, 00:26:26.841 "base_bdevs_list": [ 00:26:26.841 { 00:26:26.841 "name": "spare", 00:26:26.841 "uuid": "84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:26.841 "is_configured": true, 00:26:26.841 "data_offset": 0, 00:26:26.841 "data_size": 65536 00:26:26.841 }, 00:26:26.841 { 00:26:26.841 "name": "BaseBdev2", 00:26:26.841 "uuid": "28b64221-5c2f-5fe5-a3c4-4b76792caf26", 00:26:26.841 "is_configured": true, 00:26:26.841 "data_offset": 0, 00:26:26.841 "data_size": 65536 00:26:26.841 }, 00:26:26.841 { 00:26:26.841 "name": "BaseBdev3", 00:26:26.841 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:26.841 "is_configured": true, 00:26:26.841 "data_offset": 0, 00:26:26.841 "data_size": 65536 00:26:26.841 }, 00:26:26.841 { 00:26:26.841 "name": "BaseBdev4", 00:26:26.841 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:26.841 "is_configured": true, 00:26:26.841 "data_offset": 0, 00:26:26.841 "data_size": 65536 00:26:26.841 } 00:26:26.841 ] 00:26:26.841 }' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:26:26.841 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:27.100 [2024-07-26 10:36:39.890446] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:27.100 [2024-07-26 10:36:39.948632] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x23cd1b0 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:26:27.100 10:36:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.100 10:36:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.358 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.358 "name": "raid_bdev1", 00:26:27.358 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:27.358 "strip_size_kb": 0, 00:26:27.358 "state": "online", 00:26:27.358 "raid_level": "raid1", 00:26:27.358 "superblock": false, 00:26:27.358 "num_base_bdevs": 4, 00:26:27.358 "num_base_bdevs_discovered": 3, 00:26:27.358 "num_base_bdevs_operational": 3, 00:26:27.358 "process": { 00:26:27.358 "type": "rebuild", 00:26:27.358 "target": "spare", 00:26:27.358 "progress": { 00:26:27.358 "blocks": 36864, 00:26:27.358 "percent": 56 00:26:27.358 } 00:26:27.358 }, 00:26:27.358 "base_bdevs_list": [ 00:26:27.358 { 00:26:27.358 "name": "spare", 00:26:27.358 "uuid": "84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:27.358 "is_configured": true, 00:26:27.358 "data_offset": 0, 00:26:27.358 "data_size": 65536 00:26:27.358 }, 00:26:27.358 { 00:26:27.358 "name": null, 00:26:27.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.358 "is_configured": false, 00:26:27.358 "data_offset": 0, 00:26:27.358 "data_size": 65536 00:26:27.358 }, 00:26:27.358 { 00:26:27.358 "name": "BaseBdev3", 00:26:27.358 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:27.358 "is_configured": true, 00:26:27.358 "data_offset": 0, 00:26:27.358 "data_size": 65536 00:26:27.358 }, 00:26:27.358 { 00:26:27.358 "name": "BaseBdev4", 00:26:27.358 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:27.358 "is_configured": true, 00:26:27.358 "data_offset": 0, 00:26:27.358 "data_size": 65536 00:26:27.358 } 00:26:27.358 ] 00:26:27.358 }' 00:26:27.358 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:27.358 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.359 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=844 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.617 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:27.617 "name": "raid_bdev1", 00:26:27.617 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:27.617 "strip_size_kb": 0, 00:26:27.617 "state": "online", 00:26:27.617 "raid_level": "raid1", 00:26:27.617 "superblock": false, 00:26:27.617 "num_base_bdevs": 4, 00:26:27.617 "num_base_bdevs_discovered": 3, 00:26:27.617 "num_base_bdevs_operational": 3, 00:26:27.617 "process": { 00:26:27.617 "type": "rebuild", 00:26:27.617 "target": "spare", 00:26:27.617 "progress": { 00:26:27.617 "blocks": 43008, 00:26:27.617 "percent": 65 00:26:27.617 } 00:26:27.617 }, 00:26:27.617 "base_bdevs_list": [ 00:26:27.617 { 00:26:27.617 "name": "spare", 00:26:27.617 "uuid": "84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:27.617 "is_configured": true, 00:26:27.617 "data_offset": 0, 00:26:27.617 "data_size": 65536 00:26:27.617 }, 00:26:27.617 { 00:26:27.617 "name": null, 00:26:27.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.617 "is_configured": false, 00:26:27.617 "data_offset": 0, 00:26:27.617 "data_size": 65536 00:26:27.617 }, 00:26:27.617 { 00:26:27.617 "name": "BaseBdev3", 00:26:27.617 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:27.617 "is_configured": true, 00:26:27.617 "data_offset": 0, 00:26:27.617 "data_size": 65536 00:26:27.617 }, 00:26:27.617 { 00:26:27.617 "name": "BaseBdev4", 00:26:27.617 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:27.617 "is_configured": true, 00:26:27.617 "data_offset": 0, 00:26:27.617 "data_size": 65536 00:26:27.617 } 00:26:27.617 ] 00:26:27.617 }' 00:26:27.875 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:27.875 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.875 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:27.875 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.875 10:36:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:28.811 [2024-07-26 10:36:41.560248] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:28.811 [2024-07-26 10:36:41.560301] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:28.811 [2024-07-26 10:36:41.560335] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:28.811 10:36:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.811 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.069 "name": "raid_bdev1", 00:26:29.069 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:29.069 "strip_size_kb": 0, 00:26:29.069 "state": "online", 00:26:29.069 "raid_level": "raid1", 00:26:29.069 "superblock": false, 00:26:29.069 "num_base_bdevs": 4, 00:26:29.069 "num_base_bdevs_discovered": 3, 00:26:29.069 "num_base_bdevs_operational": 3, 00:26:29.069 "base_bdevs_list": [ 00:26:29.069 { 00:26:29.069 "name": "spare", 00:26:29.069 "uuid": "84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:29.069 "is_configured": true, 00:26:29.069 "data_offset": 0, 00:26:29.069 "data_size": 65536 00:26:29.069 }, 00:26:29.069 { 00:26:29.069 "name": null, 00:26:29.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.069 "is_configured": false, 00:26:29.069 "data_offset": 0, 00:26:29.069 "data_size": 65536 00:26:29.069 }, 00:26:29.069 { 00:26:29.069 "name": "BaseBdev3", 00:26:29.069 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:29.069 "is_configured": true, 00:26:29.069 "data_offset": 0, 00:26:29.069 "data_size": 65536 00:26:29.069 }, 00:26:29.069 { 00:26:29.069 "name": "BaseBdev4", 00:26:29.069 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:29.069 "is_configured": true, 00:26:29.069 "data_offset": 0, 00:26:29.069 "data_size": 65536 00:26:29.069 } 00:26:29.069 ] 00:26:29.069 }' 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:29.069 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:29.070 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:29.070 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:29.070 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:29.070 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.070 10:36:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.328 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.328 "name": "raid_bdev1", 00:26:29.328 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:29.328 "strip_size_kb": 0, 00:26:29.328 "state": "online", 
00:26:29.328 "raid_level": "raid1", 00:26:29.328 "superblock": false, 00:26:29.328 "num_base_bdevs": 4, 00:26:29.328 "num_base_bdevs_discovered": 3, 00:26:29.328 "num_base_bdevs_operational": 3, 00:26:29.328 "base_bdevs_list": [ 00:26:29.328 { 00:26:29.328 "name": "spare", 00:26:29.328 "uuid": "84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:29.328 "is_configured": true, 00:26:29.328 "data_offset": 0, 00:26:29.328 "data_size": 65536 00:26:29.328 }, 00:26:29.328 { 00:26:29.328 "name": null, 00:26:29.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.328 "is_configured": false, 00:26:29.328 "data_offset": 0, 00:26:29.328 "data_size": 65536 00:26:29.328 }, 00:26:29.328 { 00:26:29.328 "name": "BaseBdev3", 00:26:29.328 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:29.328 "is_configured": true, 00:26:29.328 "data_offset": 0, 00:26:29.328 "data_size": 65536 00:26:29.328 }, 00:26:29.328 { 00:26:29.328 "name": "BaseBdev4", 00:26:29.328 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:29.328 "is_configured": true, 00:26:29.329 "data_offset": 0, 00:26:29.329 "data_size": 65536 00:26:29.329 } 00:26:29.329 ] 00:26:29.329 }' 00:26:29.329 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:29.329 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:29.329 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.587 "name": "raid_bdev1", 00:26:29.587 "uuid": "4fc30bf0-8cb2-43b1-9687-670eac6d0e30", 00:26:29.587 "strip_size_kb": 0, 00:26:29.587 "state": "online", 00:26:29.587 "raid_level": "raid1", 00:26:29.587 "superblock": false, 00:26:29.587 "num_base_bdevs": 4, 00:26:29.587 "num_base_bdevs_discovered": 3, 00:26:29.587 "num_base_bdevs_operational": 3, 00:26:29.587 "base_bdevs_list": [ 00:26:29.587 { 00:26:29.587 "name": "spare", 00:26:29.587 "uuid": 
"84f15521-72e7-5942-9450-f6c4160c6d05", 00:26:29.587 "is_configured": true, 00:26:29.587 "data_offset": 0, 00:26:29.587 "data_size": 65536 00:26:29.587 }, 00:26:29.587 { 00:26:29.587 "name": null, 00:26:29.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.587 "is_configured": false, 00:26:29.587 "data_offset": 0, 00:26:29.587 "data_size": 65536 00:26:29.587 }, 00:26:29.587 { 00:26:29.587 "name": "BaseBdev3", 00:26:29.587 "uuid": "27ee238f-0585-5c6f-9530-2ee458949768", 00:26:29.587 "is_configured": true, 00:26:29.587 "data_offset": 0, 00:26:29.587 "data_size": 65536 00:26:29.587 }, 00:26:29.587 { 00:26:29.587 "name": "BaseBdev4", 00:26:29.587 "uuid": "32894b21-a8af-5112-85a8-44c93109c5ff", 00:26:29.587 "is_configured": true, 00:26:29.587 "data_offset": 0, 00:26:29.587 "data_size": 65536 00:26:29.587 } 00:26:29.587 ] 00:26:29.587 }' 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.587 10:36:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.154 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:30.412 [2024-07-26 10:36:43.248844] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:30.412 [2024-07-26 10:36:43.248868] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:30.412 [2024-07-26 10:36:43.248921] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:30.412 [2024-07-26 10:36:43.248986] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:30.412 [2024-07-26 10:36:43.248997] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23c8840 name raid_bdev1, state offline 00:26:30.412 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.412 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:30.671 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:30.929 /dev/nbd0 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:30.929 1+0 records in 00:26:30.929 1+0 records out 00:26:30.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176135 s, 23.3 MB/s 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:30.929 10:36:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:31.187 /dev/nbd1 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
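The nbd readiness check traced above is the same pattern used for both devices before their raw contents are compared: poll /proc/partitions until the nbd name appears, then prove the device is serviceable with a single direct-I/O read. A minimal bash sketch of that pattern, with an illustrative function name and retry count rather than the exact helper from autotest_common.sh:

    wait_for_nbd() {
        local nbd_name=$1
        local i
        # wait for the kernel to expose the device node
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a single direct-I/O 4 KiB read confirms the device actually services requests
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }
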
00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:31.187 1+0 records in 00:26:31.187 1+0 records out 00:26:31.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355957 s, 11.5 MB/s 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:31.187 10:36:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.445 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:31.703 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 3491743 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 3491743 ']' 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 3491743 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3491743 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3491743' 00:26:31.961 killing process with pid 3491743 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 3491743 00:26:31.961 Received shutdown signal, test time was about 60.000000 seconds 00:26:31.961 00:26:31.961 Latency(us) 00:26:31.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.961 =================================================================================================================== 00:26:31.961 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:31.961 [2024-07-26 10:36:44.672012] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:31.961 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 3491743 00:26:31.961 [2024-07-26 10:36:44.710089] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:26:32.220 00:26:32.220 real 0m22.095s 00:26:32.220 user 0m30.580s 00:26:32.220 sys 0m4.505s 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.220 ************************************ 00:26:32.220 END TEST raid_rebuild_test 00:26:32.220 ************************************ 00:26:32.220 10:36:44 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:26:32.220 10:36:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:32.220 10:36:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.220 10:36:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:32.220 ************************************ 00:26:32.220 START TEST raid_rebuild_test_sb 00:26:32.220 ************************************ 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 
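The positional arguments in the invocation above map onto the locals set next in the trace: raid level raid1, four base bdevs, superblock enabled, no background I/O, verification on. The state checks repeated throughout this log reduce to one RPC call plus two jq filters over the embedded rebuild process descriptor; a minimal bash sketch of that check, reusing the socket path from the trace (the function name is illustrative, not part of bdev_raid.sh):

    check_rebuild_running() {
        local rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        local info
        # fetch the raid bdev entry by name
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # a running rebuild reports process type "rebuild" with the spare as its target
        [[ $(echo "$info" | jq -r '.process.type // "none"') == rebuild ]] &&
        [[ $(echo "$info" | jq -r '.process.target // "none"') == spare ]]
    }
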
00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=3495679 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 3495679 /var/tmp/spdk-raid.sock 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 
3M -q 2 -U -z -L bdev_raid 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3495679 ']' 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:32.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.220 10:36:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.220 [2024-07-26 10:36:45.036208] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:26:32.220 [2024-07-26 10:36:45.036264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495679 ] 00:26:32.220 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:32.220 Zero copy mechanism will not be used. 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:32.220 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.220 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:32.220 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:32.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.221 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:32.479 [2024-07-26 10:36:45.168222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.479 [2024-07-26 10:36:45.212916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.479 [2024-07-26 10:36:45.268790] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:32.479 [2024-07-26 10:36:45.268821] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:33.047 10:36:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:33.047 10:36:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:26:33.047 10:36:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:33.047 10:36:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:33.305 BaseBdev1_malloc 00:26:33.306 10:36:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:33.564 [2024-07-26 10:36:46.379063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:33.564 [2024-07-26 10:36:46.379108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.564 [2024-07-26 10:36:46.379133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x237f370 00:26:33.564 [2024-07-26 10:36:46.379151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.564 [2024-07-26 10:36:46.380581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.564 [2024-07-26 10:36:46.380607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:33.564 BaseBdev1 00:26:33.564 10:36:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:33.564 10:36:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:33.822 BaseBdev2_malloc 00:26:33.822 10:36:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:34.080 [2024-07-26 10:36:46.840735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:34.080 [2024-07-26 10:36:46.840776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.080 [2024-07-26 10:36:46.840793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x233b0d0 00:26:34.080 [2024-07-26 10:36:46.840805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.080 [2024-07-26 10:36:46.842244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.081 [2024-07-26 10:36:46.842269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:34.081 BaseBdev2 00:26:34.081 10:36:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:34.081 10:36:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:34.339 BaseBdev3_malloc 00:26:34.339 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:34.651 [2024-07-26 10:36:47.298361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:34.651 [2024-07-26 10:36:47.298404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.651 [2024-07-26 10:36:47.298422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23269f0 00:26:34.651 [2024-07-26 10:36:47.298434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.651 [2024-07-26 10:36:47.299843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.651 [2024-07-26 10:36:47.299869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:34.651 BaseBdev3 00:26:34.651 10:36:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:34.651 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:34.651 BaseBdev4_malloc 00:26:34.909 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:34.909 [2024-07-26 10:36:47.763899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:34.909 [2024-07-26 10:36:47.763941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.909 [2024-07-26 10:36:47.763962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2328270 00:26:34.909 [2024-07-26 10:36:47.763973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.909 [2024-07-26 10:36:47.765313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.909 [2024-07-26 10:36:47.765341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:34.909 BaseBdev4 00:26:34.909 10:36:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:35.166 spare_malloc 00:26:35.166 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:35.424 spare_delay 00:26:35.424 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:35.681 [2024-07-26 10:36:48.441820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:35.681 [2024-07-26 10:36:48.441862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.681 [2024-07-26 10:36:48.441882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x232a700 00:26:35.681 [2024-07-26 10:36:48.441894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.681 [2024-07-26 10:36:48.443267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.681 [2024-07-26 10:36:48.443292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:35.681 spare 00:26:35.681 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:35.940 [2024-07-26 10:36:48.654405] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:35.940 [2024-07-26 10:36:48.655505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:35.940 [2024-07-26 10:36:48.655557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:35.940 [2024-07-26 10:36:48.655598] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 
is claimed 00:26:35.940 [2024-07-26 10:36:48.655751] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x232c840 00:26:35.940 [2024-07-26 10:36:48.655762] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:35.940 [2024-07-26 10:36:48.655941] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2330140 00:26:35.940 [2024-07-26 10:36:48.656068] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x232c840 00:26:35.940 [2024-07-26 10:36:48.656077] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x232c840 00:26:35.940 [2024-07-26 10:36:48.656213] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.940 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.198 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.198 "name": "raid_bdev1", 00:26:36.198 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:36.198 "strip_size_kb": 0, 00:26:36.198 "state": "online", 00:26:36.198 "raid_level": "raid1", 00:26:36.198 "superblock": true, 00:26:36.198 "num_base_bdevs": 4, 00:26:36.198 "num_base_bdevs_discovered": 4, 00:26:36.198 "num_base_bdevs_operational": 4, 00:26:36.198 "base_bdevs_list": [ 00:26:36.198 { 00:26:36.198 "name": "BaseBdev1", 00:26:36.198 "uuid": "72871284-44cf-579d-b3ea-f25c41c35685", 00:26:36.198 "is_configured": true, 00:26:36.198 "data_offset": 2048, 00:26:36.198 "data_size": 63488 00:26:36.198 }, 00:26:36.198 { 00:26:36.198 "name": "BaseBdev2", 00:26:36.198 "uuid": "0444fcab-b0c2-5b97-8d8f-4abb840baa4d", 00:26:36.198 "is_configured": true, 00:26:36.198 "data_offset": 2048, 00:26:36.198 "data_size": 63488 00:26:36.198 }, 00:26:36.198 { 00:26:36.198 "name": "BaseBdev3", 00:26:36.198 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:36.198 "is_configured": true, 00:26:36.198 "data_offset": 2048, 00:26:36.198 "data_size": 63488 00:26:36.198 }, 00:26:36.198 { 00:26:36.198 "name": "BaseBdev4", 00:26:36.198 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:36.199 "is_configured": true, 00:26:36.199 "data_offset": 2048, 
00:26:36.199 "data_size": 63488 00:26:36.199 } 00:26:36.199 ] 00:26:36.199 }' 00:26:36.199 10:36:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.199 10:36:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.765 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:36.765 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:37.023 [2024-07-26 10:36:49.681436] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.023 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:26:37.023 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.023 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.282 10:36:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:37.282 [2024-07-26 10:36:50.134402] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x233a490 00:26:37.282 /dev/nbd0 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.282 1+0 records in 00:26:37.282 1+0 records out 00:26:37.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239735 s, 17.1 MB/s 00:26:37.282 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:37.541 10:36:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:44.095 63488+0 records in 00:26:44.095 63488+0 records out 00:26:44.095 32505856 bytes (33 MB, 31 MiB) copied, 6.74117 s, 4.8 MB/s 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.095 10:36:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:44.354 [2024-07-26 10:36:57.184994] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:44.354 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:44.612 [2024-07-26 10:36:57.397564] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.612 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.871 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.871 "name": "raid_bdev1", 00:26:44.871 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:44.871 "strip_size_kb": 0, 00:26:44.871 "state": "online", 00:26:44.871 "raid_level": "raid1", 00:26:44.871 "superblock": true, 00:26:44.871 "num_base_bdevs": 4, 00:26:44.871 "num_base_bdevs_discovered": 3, 00:26:44.871 "num_base_bdevs_operational": 3, 00:26:44.871 "base_bdevs_list": [ 00:26:44.871 { 00:26:44.871 "name": null, 00:26:44.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.871 "is_configured": false, 00:26:44.871 "data_offset": 2048, 00:26:44.871 "data_size": 63488 00:26:44.871 }, 00:26:44.871 { 00:26:44.871 "name": "BaseBdev2", 00:26:44.871 "uuid": "0444fcab-b0c2-5b97-8d8f-4abb840baa4d", 00:26:44.871 "is_configured": true, 00:26:44.871 "data_offset": 2048, 00:26:44.871 "data_size": 63488 00:26:44.871 }, 00:26:44.871 { 00:26:44.871 "name": "BaseBdev3", 00:26:44.871 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:44.871 "is_configured": true, 00:26:44.871 "data_offset": 2048, 00:26:44.871 "data_size": 63488 00:26:44.871 }, 00:26:44.871 { 00:26:44.871 "name": "BaseBdev4", 00:26:44.871 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:44.871 "is_configured": true, 00:26:44.871 "data_offset": 2048, 00:26:44.871 "data_size": 63488 00:26:44.871 } 00:26:44.871 ] 00:26:44.871 }' 00:26:44.871 10:36:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.871 10:36:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.436 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:45.695 [2024-07-26 10:36:58.432305] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:45.695 [2024-07-26 10:36:58.436076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x233a490 00:26:45.695 [2024-07-26 10:36:58.438086] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:45.695 10:36:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.629 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.888 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.888 "name": "raid_bdev1", 00:26:46.888 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:46.888 "strip_size_kb": 0, 00:26:46.888 "state": "online", 00:26:46.888 "raid_level": "raid1", 00:26:46.888 "superblock": true, 00:26:46.888 "num_base_bdevs": 4, 00:26:46.888 "num_base_bdevs_discovered": 4, 00:26:46.888 "num_base_bdevs_operational": 4, 00:26:46.888 "process": { 00:26:46.888 "type": "rebuild", 00:26:46.888 "target": "spare", 00:26:46.888 "progress": { 00:26:46.888 "blocks": 24576, 00:26:46.888 "percent": 38 00:26:46.888 } 00:26:46.888 }, 00:26:46.888 "base_bdevs_list": [ 00:26:46.888 { 00:26:46.888 "name": "spare", 00:26:46.888 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:46.888 "is_configured": true, 00:26:46.888 "data_offset": 2048, 00:26:46.888 "data_size": 63488 00:26:46.888 }, 00:26:46.888 { 00:26:46.888 "name": "BaseBdev2", 00:26:46.888 "uuid": "0444fcab-b0c2-5b97-8d8f-4abb840baa4d", 00:26:46.888 "is_configured": true, 00:26:46.888 "data_offset": 2048, 00:26:46.888 "data_size": 63488 00:26:46.888 }, 00:26:46.888 { 00:26:46.888 "name": "BaseBdev3", 00:26:46.888 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:46.888 "is_configured": true, 00:26:46.888 "data_offset": 2048, 00:26:46.888 "data_size": 63488 00:26:46.888 }, 00:26:46.888 { 00:26:46.888 "name": "BaseBdev4", 00:26:46.888 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:46.888 "is_configured": true, 00:26:46.888 "data_offset": 2048, 00:26:46.888 "data_size": 63488 00:26:46.888 } 00:26:46.888 ] 00:26:46.888 }' 00:26:46.888 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:46.888 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.888 10:36:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.888 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.888 10:36:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:47.147 [2024-07-26 10:36:59.995072] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.405 [2024-07-26 10:37:00.049766] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:47.405 [2024-07-26 10:37:00.049813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.405 [2024-07-26 10:37:00.049829] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.405 [2024-07-26 10:37:00.049837] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.405 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.405 "name": "raid_bdev1", 00:26:47.405 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:47.405 "strip_size_kb": 0, 00:26:47.405 "state": "online", 00:26:47.405 "raid_level": "raid1", 00:26:47.405 "superblock": true, 00:26:47.405 "num_base_bdevs": 4, 00:26:47.405 "num_base_bdevs_discovered": 3, 00:26:47.405 "num_base_bdevs_operational": 3, 00:26:47.405 "base_bdevs_list": [ 00:26:47.405 { 00:26:47.405 "name": null, 00:26:47.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.406 "is_configured": false, 00:26:47.406 "data_offset": 2048, 00:26:47.406 "data_size": 63488 00:26:47.406 }, 00:26:47.406 { 00:26:47.406 "name": "BaseBdev2", 00:26:47.406 "uuid": "0444fcab-b0c2-5b97-8d8f-4abb840baa4d", 00:26:47.406 "is_configured": true, 00:26:47.406 "data_offset": 2048, 00:26:47.406 "data_size": 63488 00:26:47.406 }, 00:26:47.406 { 00:26:47.406 "name": "BaseBdev3", 00:26:47.406 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:47.406 "is_configured": true, 
00:26:47.406 "data_offset": 2048, 00:26:47.406 "data_size": 63488 00:26:47.406 }, 00:26:47.406 { 00:26:47.406 "name": "BaseBdev4", 00:26:47.406 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:47.406 "is_configured": true, 00:26:47.406 "data_offset": 2048, 00:26:47.406 "data_size": 63488 00:26:47.406 } 00:26:47.406 ] 00:26:47.406 }' 00:26:47.406 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.406 10:37:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.972 10:37:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.230 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.230 "name": "raid_bdev1", 00:26:48.230 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:48.230 "strip_size_kb": 0, 00:26:48.230 "state": "online", 00:26:48.230 "raid_level": "raid1", 00:26:48.230 "superblock": true, 00:26:48.230 "num_base_bdevs": 4, 00:26:48.230 "num_base_bdevs_discovered": 3, 00:26:48.230 "num_base_bdevs_operational": 3, 00:26:48.230 "base_bdevs_list": [ 00:26:48.230 { 00:26:48.230 "name": null, 00:26:48.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.230 "is_configured": false, 00:26:48.230 "data_offset": 2048, 00:26:48.230 "data_size": 63488 00:26:48.230 }, 00:26:48.230 { 00:26:48.230 "name": "BaseBdev2", 00:26:48.230 "uuid": "0444fcab-b0c2-5b97-8d8f-4abb840baa4d", 00:26:48.230 "is_configured": true, 00:26:48.230 "data_offset": 2048, 00:26:48.230 "data_size": 63488 00:26:48.230 }, 00:26:48.230 { 00:26:48.230 "name": "BaseBdev3", 00:26:48.230 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:48.230 "is_configured": true, 00:26:48.230 "data_offset": 2048, 00:26:48.230 "data_size": 63488 00:26:48.230 }, 00:26:48.230 { 00:26:48.230 "name": "BaseBdev4", 00:26:48.230 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:48.230 "is_configured": true, 00:26:48.230 "data_offset": 2048, 00:26:48.230 "data_size": 63488 00:26:48.230 } 00:26:48.230 ] 00:26:48.230 }' 00:26:48.230 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.230 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:48.230 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.488 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:48.488 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:48.488 [2024-07-26 10:37:01.372823] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:48.488 [2024-07-26 10:37:01.376601] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x233a490 00:26:48.488 [2024-07-26 10:37:01.377982] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:48.746 10:37:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.681 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.939 "name": "raid_bdev1", 00:26:49.939 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:49.939 "strip_size_kb": 0, 00:26:49.939 "state": "online", 00:26:49.939 "raid_level": "raid1", 00:26:49.939 "superblock": true, 00:26:49.939 "num_base_bdevs": 4, 00:26:49.939 "num_base_bdevs_discovered": 4, 00:26:49.939 "num_base_bdevs_operational": 4, 00:26:49.939 "process": { 00:26:49.939 "type": "rebuild", 00:26:49.939 "target": "spare", 00:26:49.939 "progress": { 00:26:49.939 "blocks": 24576, 00:26:49.939 "percent": 38 00:26:49.939 } 00:26:49.939 }, 00:26:49.939 "base_bdevs_list": [ 00:26:49.939 { 00:26:49.939 "name": "spare", 00:26:49.939 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:49.939 "is_configured": true, 00:26:49.939 "data_offset": 2048, 00:26:49.939 "data_size": 63488 00:26:49.939 }, 00:26:49.939 { 00:26:49.939 "name": "BaseBdev2", 00:26:49.939 "uuid": "0444fcab-b0c2-5b97-8d8f-4abb840baa4d", 00:26:49.939 "is_configured": true, 00:26:49.939 "data_offset": 2048, 00:26:49.939 "data_size": 63488 00:26:49.939 }, 00:26:49.939 { 00:26:49.939 "name": "BaseBdev3", 00:26:49.939 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:49.939 "is_configured": true, 00:26:49.939 "data_offset": 2048, 00:26:49.939 "data_size": 63488 00:26:49.939 }, 00:26:49.939 { 00:26:49.939 "name": "BaseBdev4", 00:26:49.939 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:49.939 "is_configured": true, 00:26:49.939 "data_offset": 2048, 00:26:49.939 "data_size": 63488 00:26:49.939 } 00:26:49.939 ] 00:26:49.939 }' 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:26:49.939 10:37:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:26:49.939 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:26:49.939 10:37:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:50.196 [2024-07-26 10:37:02.931018] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:50.196 [2024-07-26 10:37:03.090018] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x233a490 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.454 "name": "raid_bdev1", 00:26:50.454 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:50.454 "strip_size_kb": 0, 00:26:50.454 "state": "online", 00:26:50.454 "raid_level": "raid1", 00:26:50.454 "superblock": true, 00:26:50.454 "num_base_bdevs": 4, 00:26:50.454 "num_base_bdevs_discovered": 3, 00:26:50.454 "num_base_bdevs_operational": 3, 00:26:50.454 "process": { 00:26:50.454 "type": "rebuild", 00:26:50.454 "target": "spare", 00:26:50.454 "progress": { 00:26:50.454 "blocks": 36864, 00:26:50.454 "percent": 58 00:26:50.454 } 00:26:50.454 }, 00:26:50.454 "base_bdevs_list": [ 00:26:50.454 { 00:26:50.454 "name": "spare", 00:26:50.454 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:50.454 "is_configured": true, 00:26:50.454 "data_offset": 2048, 00:26:50.454 "data_size": 63488 00:26:50.454 }, 00:26:50.454 { 00:26:50.454 "name": null, 00:26:50.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.454 "is_configured": false, 00:26:50.454 "data_offset": 2048, 00:26:50.454 "data_size": 63488 00:26:50.454 }, 00:26:50.454 { 00:26:50.454 "name": "BaseBdev3", 00:26:50.454 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:50.454 "is_configured": true, 00:26:50.454 "data_offset": 2048, 00:26:50.454 "data_size": 63488 00:26:50.454 }, 00:26:50.454 { 00:26:50.454 "name": "BaseBdev4", 00:26:50.454 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 
00:26:50.454 "is_configured": true, 00:26:50.454 "data_offset": 2048, 00:26:50.454 "data_size": 63488 00:26:50.454 } 00:26:50.454 ] 00:26:50.454 }' 00:26:50.454 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=867 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.711 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.969 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.969 "name": "raid_bdev1", 00:26:50.969 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:50.969 "strip_size_kb": 0, 00:26:50.969 "state": "online", 00:26:50.969 "raid_level": "raid1", 00:26:50.969 "superblock": true, 00:26:50.969 "num_base_bdevs": 4, 00:26:50.969 "num_base_bdevs_discovered": 3, 00:26:50.969 "num_base_bdevs_operational": 3, 00:26:50.969 "process": { 00:26:50.969 "type": "rebuild", 00:26:50.969 "target": "spare", 00:26:50.969 "progress": { 00:26:50.969 "blocks": 43008, 00:26:50.969 "percent": 67 00:26:50.969 } 00:26:50.969 }, 00:26:50.969 "base_bdevs_list": [ 00:26:50.969 { 00:26:50.969 "name": "spare", 00:26:50.969 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:50.969 "is_configured": true, 00:26:50.969 "data_offset": 2048, 00:26:50.969 "data_size": 63488 00:26:50.969 }, 00:26:50.969 { 00:26:50.969 "name": null, 00:26:50.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.969 "is_configured": false, 00:26:50.969 "data_offset": 2048, 00:26:50.969 "data_size": 63488 00:26:50.969 }, 00:26:50.969 { 00:26:50.969 "name": "BaseBdev3", 00:26:50.969 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:50.969 "is_configured": true, 00:26:50.969 "data_offset": 2048, 00:26:50.969 "data_size": 63488 00:26:50.969 }, 00:26:50.969 { 00:26:50.969 "name": "BaseBdev4", 00:26:50.969 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:50.969 "is_configured": true, 00:26:50.969 "data_offset": 2048, 00:26:50.969 "data_size": 63488 00:26:50.969 } 00:26:50.969 ] 00:26:50.969 }' 00:26:50.969 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.969 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:26:50.969 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.969 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.969 10:37:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:51.902 [2024-07-26 10:37:04.600918] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:51.902 [2024-07-26 10:37:04.600969] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:51.902 [2024-07-26 10:37:04.601058] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.902 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.160 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.160 "name": "raid_bdev1", 00:26:52.160 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:52.160 "strip_size_kb": 0, 00:26:52.160 "state": "online", 00:26:52.160 "raid_level": "raid1", 00:26:52.160 "superblock": true, 00:26:52.160 "num_base_bdevs": 4, 00:26:52.160 "num_base_bdevs_discovered": 3, 00:26:52.160 "num_base_bdevs_operational": 3, 00:26:52.160 "base_bdevs_list": [ 00:26:52.160 { 00:26:52.160 "name": "spare", 00:26:52.160 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:52.160 "is_configured": true, 00:26:52.160 "data_offset": 2048, 00:26:52.160 "data_size": 63488 00:26:52.160 }, 00:26:52.160 { 00:26:52.160 "name": null, 00:26:52.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.160 "is_configured": false, 00:26:52.160 "data_offset": 2048, 00:26:52.160 "data_size": 63488 00:26:52.160 }, 00:26:52.160 { 00:26:52.160 "name": "BaseBdev3", 00:26:52.160 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:52.160 "is_configured": true, 00:26:52.160 "data_offset": 2048, 00:26:52.160 "data_size": 63488 00:26:52.160 }, 00:26:52.160 { 00:26:52.160 "name": "BaseBdev4", 00:26:52.160 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:52.160 "is_configured": true, 00:26:52.160 "data_offset": 2048, 00:26:52.160 "data_size": 63488 00:26:52.160 } 00:26:52.160 ] 00:26:52.160 }' 00:26:52.160 10:37:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.160 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:52.160 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none 
== \s\p\a\r\e ]] 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.419 "name": "raid_bdev1", 00:26:52.419 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:52.419 "strip_size_kb": 0, 00:26:52.419 "state": "online", 00:26:52.419 "raid_level": "raid1", 00:26:52.419 "superblock": true, 00:26:52.419 "num_base_bdevs": 4, 00:26:52.419 "num_base_bdevs_discovered": 3, 00:26:52.419 "num_base_bdevs_operational": 3, 00:26:52.419 "base_bdevs_list": [ 00:26:52.419 { 00:26:52.419 "name": "spare", 00:26:52.419 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:52.419 "is_configured": true, 00:26:52.419 "data_offset": 2048, 00:26:52.419 "data_size": 63488 00:26:52.419 }, 00:26:52.419 { 00:26:52.419 "name": null, 00:26:52.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.419 "is_configured": false, 00:26:52.419 "data_offset": 2048, 00:26:52.419 "data_size": 63488 00:26:52.419 }, 00:26:52.419 { 00:26:52.419 "name": "BaseBdev3", 00:26:52.419 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:52.419 "is_configured": true, 00:26:52.419 "data_offset": 2048, 00:26:52.419 "data_size": 63488 00:26:52.419 }, 00:26:52.419 { 00:26:52.419 "name": "BaseBdev4", 00:26:52.419 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:52.419 "is_configured": true, 00:26:52.419 "data_offset": 2048, 00:26:52.419 "data_size": 63488 00:26:52.419 } 00:26:52.419 ] 00:26:52.419 }' 00:26:52.419 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.677 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:52.677 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:52.678 
10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.678 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.936 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.936 "name": "raid_bdev1", 00:26:52.936 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:52.936 "strip_size_kb": 0, 00:26:52.936 "state": "online", 00:26:52.936 "raid_level": "raid1", 00:26:52.936 "superblock": true, 00:26:52.936 "num_base_bdevs": 4, 00:26:52.936 "num_base_bdevs_discovered": 3, 00:26:52.936 "num_base_bdevs_operational": 3, 00:26:52.936 "base_bdevs_list": [ 00:26:52.936 { 00:26:52.936 "name": "spare", 00:26:52.936 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:52.936 "is_configured": true, 00:26:52.936 "data_offset": 2048, 00:26:52.936 "data_size": 63488 00:26:52.936 }, 00:26:52.936 { 00:26:52.936 "name": null, 00:26:52.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.936 "is_configured": false, 00:26:52.936 "data_offset": 2048, 00:26:52.936 "data_size": 63488 00:26:52.936 }, 00:26:52.936 { 00:26:52.936 "name": "BaseBdev3", 00:26:52.936 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:52.936 "is_configured": true, 00:26:52.936 "data_offset": 2048, 00:26:52.936 "data_size": 63488 00:26:52.936 }, 00:26:52.936 { 00:26:52.936 "name": "BaseBdev4", 00:26:52.936 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:52.936 "is_configured": true, 00:26:52.936 "data_offset": 2048, 00:26:52.936 "data_size": 63488 00:26:52.936 } 00:26:52.936 ] 00:26:52.936 }' 00:26:52.936 10:37:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.936 10:37:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.503 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:53.503 [2024-07-26 10:37:06.382028] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:53.503 [2024-07-26 10:37:06.382054] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:53.503 [2024-07-26 10:37:06.382107] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:53.503 [2024-07-26 10:37:06.382183] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:53.503 [2024-07-26 10:37:06.382194] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232c840 name raid_bdev1, state offline 00:26:53.503 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.503 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:26:53.761 10:37:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:53.762 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:54.020 /dev/nbd0 00:26:54.020 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:54.020 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:54.020 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:54.020 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:26:54.020 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:54.021 1+0 records in 00:26:54.021 1+0 records out 00:26:54.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233026 s, 17.6 MB/s 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:26:54.021 
10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.021 10:37:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:54.280 /dev/nbd1 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:54.280 1+0 records in 00:26:54.280 1+0 records out 00:26:54.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292917 s, 14.0 MB/s 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:26:54.280 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:54.538 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:54.797 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:55.055 10:37:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:55.314 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:55.573 [2024-07-26 10:37:08.236053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:55.573 [2024-07-26 10:37:08.236091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.573 [2024-07-26 10:37:08.236109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x232d210 00:26:55.573 [2024-07-26 10:37:08.236121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.573 [2024-07-26 10:37:08.237671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.573 [2024-07-26 10:37:08.237697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:55.573 [2024-07-26 10:37:08.237760] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:55.573 [2024-07-26 10:37:08.237783] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:55.573 [2024-07-26 10:37:08.237873] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:55.573 [2024-07-26 10:37:08.237939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:55.573 spare 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.573 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.573 [2024-07-26 10:37:08.338248] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x23308b0 00:26:55.573 [2024-07-26 10:37:08.338264] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:55.573 [2024-07-26 10:37:08.338436] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21ce980 00:26:55.573 [2024-07-26 10:37:08.338571] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23308b0 00:26:55.573 [2024-07-26 10:37:08.338581] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x23308b0 00:26:55.573 [2024-07-26 10:37:08.338673] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.832 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.832 "name": "raid_bdev1", 00:26:55.832 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:55.832 "strip_size_kb": 0, 00:26:55.832 "state": "online", 00:26:55.832 "raid_level": "raid1", 00:26:55.832 "superblock": true, 00:26:55.832 "num_base_bdevs": 4, 00:26:55.832 "num_base_bdevs_discovered": 3, 00:26:55.832 "num_base_bdevs_operational": 3, 00:26:55.832 "base_bdevs_list": [ 00:26:55.832 { 00:26:55.832 "name": "spare", 00:26:55.832 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:55.833 "is_configured": true, 00:26:55.833 "data_offset": 2048, 00:26:55.833 "data_size": 63488 00:26:55.833 }, 00:26:55.833 { 00:26:55.833 "name": null, 00:26:55.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.833 "is_configured": false, 00:26:55.833 "data_offset": 2048, 00:26:55.833 "data_size": 63488 00:26:55.833 }, 00:26:55.833 { 00:26:55.833 "name": "BaseBdev3", 00:26:55.833 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:55.833 "is_configured": true, 00:26:55.833 "data_offset": 2048, 00:26:55.833 "data_size": 
63488 00:26:55.833 }, 00:26:55.833 { 00:26:55.833 "name": "BaseBdev4", 00:26:55.833 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:55.833 "is_configured": true, 00:26:55.833 "data_offset": 2048, 00:26:55.833 "data_size": 63488 00:26:55.833 } 00:26:55.833 ] 00:26:55.833 }' 00:26:55.833 10:37:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.833 10:37:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.411 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:56.411 "name": "raid_bdev1", 00:26:56.411 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:56.411 "strip_size_kb": 0, 00:26:56.411 "state": "online", 00:26:56.411 "raid_level": "raid1", 00:26:56.411 "superblock": true, 00:26:56.411 "num_base_bdevs": 4, 00:26:56.411 "num_base_bdevs_discovered": 3, 00:26:56.411 "num_base_bdevs_operational": 3, 00:26:56.411 "base_bdevs_list": [ 00:26:56.411 { 00:26:56.411 "name": "spare", 00:26:56.411 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:56.411 "is_configured": true, 00:26:56.411 "data_offset": 2048, 00:26:56.411 "data_size": 63488 00:26:56.411 }, 00:26:56.411 { 00:26:56.411 "name": null, 00:26:56.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.411 "is_configured": false, 00:26:56.411 "data_offset": 2048, 00:26:56.411 "data_size": 63488 00:26:56.411 }, 00:26:56.411 { 00:26:56.411 "name": "BaseBdev3", 00:26:56.411 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:56.411 "is_configured": true, 00:26:56.411 "data_offset": 2048, 00:26:56.411 "data_size": 63488 00:26:56.411 }, 00:26:56.412 { 00:26:56.412 "name": "BaseBdev4", 00:26:56.412 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:56.412 "is_configured": true, 00:26:56.412 "data_offset": 2048, 00:26:56.412 "data_size": 63488 00:26:56.412 } 00:26:56.412 ] 00:26:56.412 }' 00:26:56.412 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:56.412 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:56.412 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:56.708 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:56.708 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:56.708 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.708 
10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.708 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:56.978 [2024-07-26 10:37:09.704196] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.978 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.236 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.236 "name": "raid_bdev1", 00:26:57.236 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:57.236 "strip_size_kb": 0, 00:26:57.236 "state": "online", 00:26:57.236 "raid_level": "raid1", 00:26:57.236 "superblock": true, 00:26:57.236 "num_base_bdevs": 4, 00:26:57.236 "num_base_bdevs_discovered": 2, 00:26:57.236 "num_base_bdevs_operational": 2, 00:26:57.236 "base_bdevs_list": [ 00:26:57.236 { 00:26:57.236 "name": null, 00:26:57.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.237 "is_configured": false, 00:26:57.237 "data_offset": 2048, 00:26:57.237 "data_size": 63488 00:26:57.237 }, 00:26:57.237 { 00:26:57.237 "name": null, 00:26:57.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.237 "is_configured": false, 00:26:57.237 "data_offset": 2048, 00:26:57.237 "data_size": 63488 00:26:57.237 }, 00:26:57.237 { 00:26:57.237 "name": "BaseBdev3", 00:26:57.237 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:57.237 "is_configured": true, 00:26:57.237 "data_offset": 2048, 00:26:57.237 "data_size": 63488 00:26:57.237 }, 00:26:57.237 { 00:26:57.237 "name": "BaseBdev4", 00:26:57.237 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:57.237 "is_configured": true, 00:26:57.237 "data_offset": 2048, 00:26:57.237 "data_size": 63488 00:26:57.237 } 00:26:57.237 ] 00:26:57.237 }' 00:26:57.237 10:37:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.237 10:37:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.802 10:37:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:58.368 [2024-07-26 10:37:11.015764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:58.368 [2024-07-26 10:37:11.015896] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:26:58.368 [2024-07-26 10:37:11.015911] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:58.368 [2024-07-26 10:37:11.015939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:58.368 [2024-07-26 10:37:11.019595] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232de00 00:26:58.368 [2024-07-26 10:37:11.021579] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:58.368 10:37:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.301 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.559 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:59.559 "name": "raid_bdev1", 00:26:59.559 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:26:59.559 "strip_size_kb": 0, 00:26:59.559 "state": "online", 00:26:59.559 "raid_level": "raid1", 00:26:59.559 "superblock": true, 00:26:59.559 "num_base_bdevs": 4, 00:26:59.559 "num_base_bdevs_discovered": 3, 00:26:59.559 "num_base_bdevs_operational": 3, 00:26:59.559 "process": { 00:26:59.559 "type": "rebuild", 00:26:59.559 "target": "spare", 00:26:59.559 "progress": { 00:26:59.559 "blocks": 24576, 00:26:59.559 "percent": 38 00:26:59.559 } 00:26:59.559 }, 00:26:59.559 "base_bdevs_list": [ 00:26:59.559 { 00:26:59.559 "name": "spare", 00:26:59.559 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:26:59.559 "is_configured": true, 00:26:59.559 "data_offset": 2048, 00:26:59.559 "data_size": 63488 00:26:59.559 }, 00:26:59.559 { 00:26:59.559 "name": null, 00:26:59.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.559 "is_configured": false, 00:26:59.559 "data_offset": 2048, 00:26:59.559 "data_size": 63488 00:26:59.559 }, 00:26:59.559 { 00:26:59.559 "name": "BaseBdev3", 00:26:59.559 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:26:59.559 "is_configured": true, 00:26:59.559 "data_offset": 2048, 00:26:59.559 "data_size": 63488 00:26:59.559 }, 00:26:59.559 { 00:26:59.559 "name": "BaseBdev4", 00:26:59.559 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:26:59.559 "is_configured": true, 00:26:59.559 "data_offset": 2048, 00:26:59.559 "data_size": 63488 00:26:59.559 } 00:26:59.559 ] 00:26:59.559 
}' 00:26:59.559 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:59.559 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:59.559 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:59.559 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:59.559 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:59.817 [2024-07-26 10:37:12.583068] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:59.817 [2024-07-26 10:37:12.633299] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:59.817 [2024-07-26 10:37:12.633341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.817 [2024-07-26 10:37:12.633356] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:59.817 [2024-07-26 10:37:12.633364] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.817 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.075 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.075 "name": "raid_bdev1", 00:27:00.075 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:00.075 "strip_size_kb": 0, 00:27:00.075 "state": "online", 00:27:00.075 "raid_level": "raid1", 00:27:00.075 "superblock": true, 00:27:00.075 "num_base_bdevs": 4, 00:27:00.075 "num_base_bdevs_discovered": 2, 00:27:00.075 "num_base_bdevs_operational": 2, 00:27:00.075 "base_bdevs_list": [ 00:27:00.075 { 00:27:00.075 "name": null, 00:27:00.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.075 "is_configured": false, 00:27:00.075 "data_offset": 2048, 00:27:00.075 "data_size": 63488 00:27:00.075 }, 00:27:00.075 { 00:27:00.075 "name": null, 00:27:00.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.075 
"is_configured": false, 00:27:00.075 "data_offset": 2048, 00:27:00.075 "data_size": 63488 00:27:00.075 }, 00:27:00.075 { 00:27:00.075 "name": "BaseBdev3", 00:27:00.075 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:00.075 "is_configured": true, 00:27:00.075 "data_offset": 2048, 00:27:00.075 "data_size": 63488 00:27:00.075 }, 00:27:00.075 { 00:27:00.075 "name": "BaseBdev4", 00:27:00.075 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:00.075 "is_configured": true, 00:27:00.075 "data_offset": 2048, 00:27:00.075 "data_size": 63488 00:27:00.075 } 00:27:00.075 ] 00:27:00.075 }' 00:27:00.075 10:37:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.075 10:37:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.641 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:00.898 [2024-07-26 10:37:13.659766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:00.898 [2024-07-26 10:37:13.659810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.898 [2024-07-26 10:37:13.659830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x232ff90 00:27:00.898 [2024-07-26 10:37:13.659841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.898 [2024-07-26 10:37:13.660183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.898 [2024-07-26 10:37:13.660200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:00.898 [2024-07-26 10:37:13.660271] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:00.898 [2024-07-26 10:37:13.660282] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:00.898 [2024-07-26 10:37:13.660291] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:00.898 [2024-07-26 10:37:13.660310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:00.898 [2024-07-26 10:37:13.664007] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2381100 00:27:00.898 spare 00:27:00.898 [2024-07-26 10:37:13.665360] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:00.898 10:37:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.828 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.084 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:02.085 "name": "raid_bdev1", 00:27:02.085 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:02.085 "strip_size_kb": 0, 00:27:02.085 "state": "online", 00:27:02.085 "raid_level": "raid1", 00:27:02.085 "superblock": true, 00:27:02.085 "num_base_bdevs": 4, 00:27:02.085 "num_base_bdevs_discovered": 3, 00:27:02.085 "num_base_bdevs_operational": 3, 00:27:02.085 "process": { 00:27:02.085 "type": "rebuild", 00:27:02.085 "target": "spare", 00:27:02.085 "progress": { 00:27:02.085 "blocks": 24576, 00:27:02.085 "percent": 38 00:27:02.085 } 00:27:02.085 }, 00:27:02.085 "base_bdevs_list": [ 00:27:02.085 { 00:27:02.085 "name": "spare", 00:27:02.085 "uuid": "4591418e-ef64-5d2e-8321-aaf719abb40e", 00:27:02.085 "is_configured": true, 00:27:02.085 "data_offset": 2048, 00:27:02.085 "data_size": 63488 00:27:02.085 }, 00:27:02.085 { 00:27:02.085 "name": null, 00:27:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.085 "is_configured": false, 00:27:02.085 "data_offset": 2048, 00:27:02.085 "data_size": 63488 00:27:02.085 }, 00:27:02.085 { 00:27:02.085 "name": "BaseBdev3", 00:27:02.085 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:02.085 "is_configured": true, 00:27:02.085 "data_offset": 2048, 00:27:02.085 "data_size": 63488 00:27:02.085 }, 00:27:02.085 { 00:27:02.085 "name": "BaseBdev4", 00:27:02.085 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:02.085 "is_configured": true, 00:27:02.085 "data_offset": 2048, 00:27:02.085 "data_size": 63488 00:27:02.085 } 00:27:02.085 ] 00:27:02.085 }' 00:27:02.085 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:02.085 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:02.085 10:37:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:02.342 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:02.342 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:02.342 [2024-07-26 10:37:15.224935] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:02.600 [2024-07-26 10:37:15.277032] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:02.600 [2024-07-26 10:37:15.277072] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.600 [2024-07-26 10:37:15.277086] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:02.600 [2024-07-26 10:37:15.277093] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.600 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.857 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.857 "name": "raid_bdev1", 00:27:02.857 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:02.857 "strip_size_kb": 0, 00:27:02.857 "state": "online", 00:27:02.857 "raid_level": "raid1", 00:27:02.857 "superblock": true, 00:27:02.857 "num_base_bdevs": 4, 00:27:02.857 "num_base_bdevs_discovered": 2, 00:27:02.857 "num_base_bdevs_operational": 2, 00:27:02.857 "base_bdevs_list": [ 00:27:02.857 { 00:27:02.857 "name": null, 00:27:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.857 "is_configured": false, 00:27:02.857 "data_offset": 2048, 00:27:02.857 "data_size": 63488 00:27:02.857 }, 00:27:02.857 { 00:27:02.857 "name": null, 00:27:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.857 "is_configured": false, 00:27:02.857 "data_offset": 2048, 00:27:02.857 "data_size": 63488 00:27:02.857 }, 00:27:02.857 { 00:27:02.857 "name": "BaseBdev3", 00:27:02.857 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:02.857 "is_configured": true, 00:27:02.857 "data_offset": 2048, 00:27:02.857 "data_size": 63488 00:27:02.857 }, 00:27:02.857 { 00:27:02.857 "name": "BaseBdev4", 00:27:02.857 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:02.857 "is_configured": true, 00:27:02.857 "data_offset": 2048, 00:27:02.857 "data_size": 63488 
00:27:02.857 } 00:27:02.857 ] 00:27:02.857 }' 00:27:02.857 10:37:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.857 10:37:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.423 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.680 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:03.680 "name": "raid_bdev1", 00:27:03.680 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:03.680 "strip_size_kb": 0, 00:27:03.680 "state": "online", 00:27:03.680 "raid_level": "raid1", 00:27:03.680 "superblock": true, 00:27:03.680 "num_base_bdevs": 4, 00:27:03.680 "num_base_bdevs_discovered": 2, 00:27:03.680 "num_base_bdevs_operational": 2, 00:27:03.680 "base_bdevs_list": [ 00:27:03.680 { 00:27:03.680 "name": null, 00:27:03.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.680 "is_configured": false, 00:27:03.680 "data_offset": 2048, 00:27:03.680 "data_size": 63488 00:27:03.680 }, 00:27:03.680 { 00:27:03.680 "name": null, 00:27:03.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.681 "is_configured": false, 00:27:03.681 "data_offset": 2048, 00:27:03.681 "data_size": 63488 00:27:03.681 }, 00:27:03.681 { 00:27:03.681 "name": "BaseBdev3", 00:27:03.681 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:03.681 "is_configured": true, 00:27:03.681 "data_offset": 2048, 00:27:03.681 "data_size": 63488 00:27:03.681 }, 00:27:03.681 { 00:27:03.681 "name": "BaseBdev4", 00:27:03.681 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:03.681 "is_configured": true, 00:27:03.681 "data_offset": 2048, 00:27:03.681 "data_size": 63488 00:27:03.681 } 00:27:03.681 ] 00:27:03.681 }' 00:27:03.681 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:03.681 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:03.681 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:03.681 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:03.681 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:03.939 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:04.197 [2024-07-26 10:37:16.877359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:04.197 
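
The rebuild-progress checks traced at bdev_raid.sh@182-@190 use jq's // alternative operator so that a raid bdev with no background process compares equal to "none". A short hedged sketch of an equivalent check, with the helper name and structure assumed rather than taken from bdev_raid.sh, and assuming the same RPC socket as above:

    rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Compare the background process type/target of a raid bdev against expectations.
    check_raid_process() {
        local name=$1 expected_type=$2 expected_target=$3
        local info
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # A missing .process object maps to the literal "none" via jq's // operator,
        # so an idle array passes the "none none" check seen in the trace above.
        [[ $(jq -r '.process.type // "none"' <<<"$info") == "$expected_type" ]] || return 1
        [[ $(jq -r '.process.target // "none"' <<<"$info") == "$expected_target" ]] || return 1
    }

    check_raid_process raid_bdev1 rebuild spare   # while the spare is being rebuilt
    check_raid_process raid_bdev1 none none       # once no background process is running
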
[2024-07-26 10:37:16.877402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.197 [2024-07-26 10:37:16.877421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x237f6b0 00:27:04.197 [2024-07-26 10:37:16.877433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.197 [2024-07-26 10:37:16.877741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.197 [2024-07-26 10:37:16.877757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:04.197 [2024-07-26 10:37:16.877813] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:04.197 [2024-07-26 10:37:16.877825] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:04.197 [2024-07-26 10:37:16.877835] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:04.197 BaseBdev1 00:27:04.197 10:37:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.131 10:37:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.390 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.390 "name": "raid_bdev1", 00:27:05.390 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:05.390 "strip_size_kb": 0, 00:27:05.390 "state": "online", 00:27:05.390 "raid_level": "raid1", 00:27:05.390 "superblock": true, 00:27:05.390 "num_base_bdevs": 4, 00:27:05.390 "num_base_bdevs_discovered": 2, 00:27:05.390 "num_base_bdevs_operational": 2, 00:27:05.390 "base_bdevs_list": [ 00:27:05.390 { 00:27:05.390 "name": null, 00:27:05.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.390 "is_configured": false, 00:27:05.390 "data_offset": 2048, 00:27:05.390 "data_size": 63488 00:27:05.390 }, 00:27:05.390 { 00:27:05.390 "name": null, 00:27:05.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.390 "is_configured": false, 00:27:05.390 "data_offset": 2048, 00:27:05.390 "data_size": 63488 00:27:05.390 }, 00:27:05.390 { 
00:27:05.390 "name": "BaseBdev3", 00:27:05.390 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:05.390 "is_configured": true, 00:27:05.390 "data_offset": 2048, 00:27:05.390 "data_size": 63488 00:27:05.390 }, 00:27:05.390 { 00:27:05.390 "name": "BaseBdev4", 00:27:05.390 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:05.390 "is_configured": true, 00:27:05.390 "data_offset": 2048, 00:27:05.390 "data_size": 63488 00:27:05.390 } 00:27:05.390 ] 00:27:05.390 }' 00:27:05.390 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.390 10:37:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.956 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.214 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:06.214 "name": "raid_bdev1", 00:27:06.214 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:06.214 "strip_size_kb": 0, 00:27:06.214 "state": "online", 00:27:06.214 "raid_level": "raid1", 00:27:06.215 "superblock": true, 00:27:06.215 "num_base_bdevs": 4, 00:27:06.215 "num_base_bdevs_discovered": 2, 00:27:06.215 "num_base_bdevs_operational": 2, 00:27:06.215 "base_bdevs_list": [ 00:27:06.215 { 00:27:06.215 "name": null, 00:27:06.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.215 "is_configured": false, 00:27:06.215 "data_offset": 2048, 00:27:06.215 "data_size": 63488 00:27:06.215 }, 00:27:06.215 { 00:27:06.215 "name": null, 00:27:06.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.215 "is_configured": false, 00:27:06.215 "data_offset": 2048, 00:27:06.215 "data_size": 63488 00:27:06.215 }, 00:27:06.215 { 00:27:06.215 "name": "BaseBdev3", 00:27:06.215 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:06.215 "is_configured": true, 00:27:06.215 "data_offset": 2048, 00:27:06.215 "data_size": 63488 00:27:06.215 }, 00:27:06.215 { 00:27:06.215 "name": "BaseBdev4", 00:27:06.215 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:06.215 "is_configured": true, 00:27:06.215 "data_offset": 2048, 00:27:06.215 "data_size": 63488 00:27:06.215 } 00:27:06.215 ] 00:27:06.215 }' 00:27:06.215 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:06.215 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:06.215 10:37:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:06.215 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:06.473 [2024-07-26 10:37:19.231588] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:06.473 [2024-07-26 10:37:19.231701] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:06.473 [2024-07-26 10:37:19.231715] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:06.473 request: 00:27:06.473 { 00:27:06.473 "base_bdev": "BaseBdev1", 00:27:06.473 "raid_bdev": "raid_bdev1", 00:27:06.473 "method": "bdev_raid_add_base_bdev", 00:27:06.473 "req_id": 1 00:27:06.473 } 00:27:06.473 Got JSON-RPC error response 00:27:06.473 response: 00:27:06.473 { 00:27:06.473 "code": -22, 00:27:06.473 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:06.473 } 00:27:06.473 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:27:06.473 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.473 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:06.473 10:37:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.473 10:37:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.407 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.408 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.408 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.408 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.408 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.666 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.666 "name": "raid_bdev1", 00:27:07.666 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:07.666 "strip_size_kb": 0, 00:27:07.666 "state": "online", 00:27:07.666 "raid_level": "raid1", 00:27:07.666 "superblock": true, 00:27:07.666 "num_base_bdevs": 4, 00:27:07.667 "num_base_bdevs_discovered": 2, 00:27:07.667 "num_base_bdevs_operational": 2, 00:27:07.667 "base_bdevs_list": [ 00:27:07.667 { 00:27:07.667 "name": null, 00:27:07.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.667 "is_configured": false, 00:27:07.667 "data_offset": 2048, 00:27:07.667 "data_size": 63488 00:27:07.667 }, 00:27:07.667 { 00:27:07.667 "name": null, 00:27:07.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.667 "is_configured": false, 00:27:07.667 "data_offset": 2048, 00:27:07.667 "data_size": 63488 00:27:07.667 }, 00:27:07.667 { 00:27:07.667 "name": "BaseBdev3", 00:27:07.667 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:07.667 "is_configured": true, 00:27:07.667 "data_offset": 2048, 00:27:07.667 "data_size": 63488 00:27:07.667 }, 00:27:07.667 { 00:27:07.667 "name": "BaseBdev4", 00:27:07.667 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:07.667 "is_configured": true, 00:27:07.667 "data_offset": 2048, 00:27:07.667 "data_size": 63488 00:27:07.667 } 00:27:07.667 ] 00:27:07.667 }' 00:27:07.667 10:37:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.667 10:37:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.233 10:37:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.491 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:08.491 "name": "raid_bdev1", 00:27:08.491 "uuid": "ad32a6dc-e582-47b8-804b-fa9046d53e8c", 00:27:08.491 "strip_size_kb": 0, 00:27:08.491 "state": "online", 00:27:08.491 "raid_level": "raid1", 00:27:08.491 "superblock": true, 00:27:08.491 "num_base_bdevs": 4, 00:27:08.491 "num_base_bdevs_discovered": 2, 00:27:08.491 "num_base_bdevs_operational": 2, 00:27:08.491 "base_bdevs_list": [ 00:27:08.491 { 00:27:08.491 "name": null, 00:27:08.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.491 "is_configured": false, 00:27:08.491 "data_offset": 2048, 00:27:08.491 "data_size": 63488 00:27:08.491 }, 00:27:08.491 { 00:27:08.491 "name": null, 00:27:08.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.491 "is_configured": false, 00:27:08.491 "data_offset": 2048, 00:27:08.491 "data_size": 63488 00:27:08.491 }, 00:27:08.491 { 00:27:08.491 "name": "BaseBdev3", 00:27:08.491 "uuid": "cd2906b3-a32c-5e8c-bc60-6866ebd204d3", 00:27:08.491 "is_configured": true, 00:27:08.491 "data_offset": 2048, 00:27:08.491 "data_size": 63488 00:27:08.491 }, 00:27:08.491 { 00:27:08.491 "name": "BaseBdev4", 00:27:08.491 "uuid": "bde9afb8-22d9-5ddb-88b3-50ee1a9e6d6c", 00:27:08.491 "is_configured": true, 00:27:08.491 "data_offset": 2048, 00:27:08.491 "data_size": 63488 00:27:08.491 } 00:27:08.491 ] 00:27:08.491 }' 00:27:08.491 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 3495679 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3495679 ']' 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 3495679 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.492 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3495679 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3495679' 00:27:08.750 killing process with pid 3495679 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 3495679 00:27:08.750 Received shutdown signal, test time was about 60.000000 seconds 00:27:08.750 00:27:08.750 Latency(us) 00:27:08.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.750 =================================================================================================================== 00:27:08.750 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:08.750 [2024-07-26 10:37:21.437241] 
bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:08.750 [2024-07-26 10:37:21.437323] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.750 [2024-07-26 10:37:21.437381] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:08.750 [2024-07-26 10:37:21.437392] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23308b0 name raid_bdev1, state offline 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 3495679 00:27:08.750 [2024-07-26 10:37:21.475768] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:27:08.750 00:27:08.750 real 0m36.683s 00:27:08.750 user 0m53.137s 00:27:08.750 sys 0m6.518s 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.750 10:37:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.750 ************************************ 00:27:08.750 END TEST raid_rebuild_test_sb 00:27:08.750 ************************************ 00:27:09.009 10:37:21 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:27:09.009 10:37:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:09.009 10:37:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.009 10:37:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:09.009 ************************************ 00:27:09.009 START TEST raid_rebuild_test_io 00:27:09.009 ************************************ 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:09.009 10:37:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3502205 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3502205 /var/tmp/spdk-raid.sock 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 3502205 ']' 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:09.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.009 10:37:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:09.009 [2024-07-26 10:37:21.810101] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:09.009 [2024-07-26 10:37:21.810175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502205 ] 00:27:09.009 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:09.009 Zero copy mechanism will not be used. 
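
Unlike the superblock test above, raid_rebuild_test_io drives background I/O through bdevperf instead of only exercising RPCs. The flags in the command traced at bdev_raid.sh@611 request 60 seconds of 50/50 random read/write in 3M units at queue depth 2, and -z holds the workload until a perform_tests RPC arrives so the raid bdev can be assembled first. A hedged sketch of launching it by hand, assuming the same workspace layout as this job (the polling loop below is only a stand-in for the waitforlisten helper used by the test):

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    sock=/var/tmp/spdk-raid.sock

    # Same flags as the traced invocation; -z defers I/O until perform_tests is sent.
    $spdk/build/examples/bdevperf -r $sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    bdevperf_pid=$!

    # Wait until the application answers on its RPC socket.
    until $spdk/scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
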
00:27:09.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.009 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:09.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.009 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:09.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.009 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:09.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:09.010 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:09.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:09.010 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:09.272 [2024-07-26 10:37:21.945520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.272 [2024-07-26 10:37:21.989618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.272 [2024-07-26 10:37:22.045759] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:09.272 [2024-07-26 10:37:22.045788] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:09.889 10:37:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.889 10:37:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:27:09.889 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:09.889 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:10.147 BaseBdev1_malloc 00:27:10.147 10:37:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:10.405 [2024-07-26 10:37:23.147443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:10.405 [2024-07-26 10:37:23.147490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.405 [2024-07-26 10:37:23.147511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2028370 00:27:10.405 [2024-07-26 10:37:23.147523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.405 [2024-07-26 10:37:23.148906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.405 [2024-07-26 10:37:23.148936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:10.405 BaseBdev1 00:27:10.405 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:10.405 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:10.662 BaseBdev2_malloc 00:27:10.662 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 
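
The per-base-bdev setup pattern traced at bdev_raid.sh@616-@618 wraps each 32 MB, 512-byte-block malloc bdev in a passthru bdev, so that the passthru can later be deleted and re-created to simulate losing and re-adding a member. A hedged sketch of that pattern, using only the RPCs visible in the trace:

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b ${name}_malloc     # 32 MB backing store
        $rpc bdev_passthru_create -b ${name}_malloc -p $name # removable front end
    done

    # The spare additionally sits on a delay bdev (bdev_raid.sh@622-@624, traced a
    # little further below), and the array itself is then assembled with:
    #   $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
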
00:27:10.919 [2024-07-26 10:37:23.609087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:10.919 [2024-07-26 10:37:23.609126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.919 [2024-07-26 10:37:23.609147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fe40d0 00:27:10.919 [2024-07-26 10:37:23.609159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.919 [2024-07-26 10:37:23.610509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.919 [2024-07-26 10:37:23.610534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:10.919 BaseBdev2 00:27:10.919 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:10.919 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:11.177 BaseBdev3_malloc 00:27:11.177 10:37:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:11.177 [2024-07-26 10:37:24.058486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:11.177 [2024-07-26 10:37:24.058522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.177 [2024-07-26 10:37:24.058539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fcf9f0 00:27:11.177 [2024-07-26 10:37:24.058550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.177 [2024-07-26 10:37:24.059763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.177 [2024-07-26 10:37:24.059787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:11.177 BaseBdev3 00:27:11.177 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:11.177 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:11.435 BaseBdev4_malloc 00:27:11.435 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:11.693 [2024-07-26 10:37:24.519766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:11.693 [2024-07-26 10:37:24.519804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.693 [2024-07-26 10:37:24.519825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd1270 00:27:11.693 [2024-07-26 10:37:24.519837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.693 [2024-07-26 10:37:24.521108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.693 [2024-07-26 10:37:24.521134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:11.693 BaseBdev4 00:27:11.693 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:11.950 spare_malloc 00:27:11.950 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:12.208 spare_delay 00:27:12.208 10:37:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:12.466 [2024-07-26 10:37:25.197740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:12.466 [2024-07-26 10:37:25.197781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.466 [2024-07-26 10:37:25.197801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd3700 00:27:12.466 [2024-07-26 10:37:25.197812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.466 [2024-07-26 10:37:25.199078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.466 [2024-07-26 10:37:25.199105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:12.466 spare 00:27:12.467 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:12.725 [2024-07-26 10:37:25.422347] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:12.725 [2024-07-26 10:37:25.423421] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:12.725 [2024-07-26 10:37:25.423472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:12.725 [2024-07-26 10:37:25.423513] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:12.725 [2024-07-26 10:37:25.423578] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fd5840 00:27:12.725 [2024-07-26 10:37:25.423587] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:12.725 [2024-07-26 10:37:25.423769] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fd9180 00:27:12.725 [2024-07-26 10:37:25.423893] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fd5840 00:27:12.725 [2024-07-26 10:37:25.423902] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fd5840 00:27:12.725 [2024-07-26 10:37:25.423996] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.725 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.983 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.983 "name": "raid_bdev1", 00:27:12.983 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:12.983 "strip_size_kb": 0, 00:27:12.983 "state": "online", 00:27:12.983 "raid_level": "raid1", 00:27:12.983 "superblock": false, 00:27:12.983 "num_base_bdevs": 4, 00:27:12.983 "num_base_bdevs_discovered": 4, 00:27:12.983 "num_base_bdevs_operational": 4, 00:27:12.983 "base_bdevs_list": [ 00:27:12.983 { 00:27:12.983 "name": "BaseBdev1", 00:27:12.983 "uuid": "1c7a11e0-3fba-5adb-b276-2819fed41d5a", 00:27:12.983 "is_configured": true, 00:27:12.983 "data_offset": 0, 00:27:12.983 "data_size": 65536 00:27:12.983 }, 00:27:12.983 { 00:27:12.983 "name": "BaseBdev2", 00:27:12.983 "uuid": "88568c1a-cbba-5256-ae5a-800dcf469508", 00:27:12.983 "is_configured": true, 00:27:12.983 "data_offset": 0, 00:27:12.983 "data_size": 65536 00:27:12.983 }, 00:27:12.983 { 00:27:12.983 "name": "BaseBdev3", 00:27:12.983 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:12.983 "is_configured": true, 00:27:12.983 "data_offset": 0, 00:27:12.983 "data_size": 65536 00:27:12.983 }, 00:27:12.983 { 00:27:12.983 "name": "BaseBdev4", 00:27:12.983 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:12.983 "is_configured": true, 00:27:12.983 "data_offset": 0, 00:27:12.983 "data_size": 65536 00:27:12.983 } 00:27:12.983 ] 00:27:12.983 }' 00:27:12.983 10:37:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.983 10:37:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:13.550 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:13.550 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:13.808 [2024-07-26 10:37:26.465364] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.808 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:27:13.808 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.808 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:14.067 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:27:14.067 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:27:14.067 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:14.067 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:14.067 [2024-07-26 10:37:26.819982] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fda8a0 00:27:14.067 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:14.067 Zero copy mechanism will not be used. 00:27:14.067 Running I/O for 60 seconds... 00:27:14.067 [2024-07-26 10:37:26.939253] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:14.067 [2024-07-26 10:37:26.954177] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fda8a0 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.326 10:37:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.585 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.585 "name": "raid_bdev1", 00:27:14.585 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:14.585 "strip_size_kb": 0, 00:27:14.585 "state": "online", 00:27:14.585 "raid_level": "raid1", 00:27:14.585 "superblock": false, 00:27:14.585 "num_base_bdevs": 4, 00:27:14.585 "num_base_bdevs_discovered": 3, 00:27:14.585 "num_base_bdevs_operational": 3, 00:27:14.585 "base_bdevs_list": [ 00:27:14.585 { 00:27:14.585 "name": null, 00:27:14.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.585 "is_configured": false, 00:27:14.585 "data_offset": 0, 00:27:14.585 "data_size": 65536 00:27:14.585 }, 00:27:14.585 { 00:27:14.585 "name": "BaseBdev2", 00:27:14.585 "uuid": "88568c1a-cbba-5256-ae5a-800dcf469508", 00:27:14.585 "is_configured": true, 00:27:14.585 "data_offset": 0, 00:27:14.585 "data_size": 65536 00:27:14.585 }, 00:27:14.585 { 00:27:14.585 "name": "BaseBdev3", 00:27:14.585 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:14.585 "is_configured": true, 00:27:14.585 "data_offset": 0, 00:27:14.585 "data_size": 65536 00:27:14.585 }, 00:27:14.585 { 00:27:14.585 "name": "BaseBdev4", 00:27:14.585 "uuid": 
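
At this point the trace shows the degrade-under-I/O step: the removal RPC and the perform_tests call are issued back to back, so BaseBdev1 disappears while the 60-second randrw workload is in flight, and the state check that follows still expects an online raid1 with 3 of 4 base bdevs. A hedged sketch of that sequence; backgrounding perform_tests is an assumption made here so the state checks can run while I/O continues:

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    sock=/var/tmp/spdk-raid.sock

    # Pull one member out of the array and start the deferred bdevperf workload.
    $spdk/scripts/rpc.py -s $sock bdev_raid_remove_base_bdev BaseBdev1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    io_pid=$!

    # raid1 keeps serving I/O with one member gone; the trace above then verifies
    # state "online" with 3 discovered/operational base bdevs via
    # bdev_raid_get_bdevs + jq (bdev_raid.sh@658, @116-@126).
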
"4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:14.585 "is_configured": true, 00:27:14.585 "data_offset": 0, 00:27:14.585 "data_size": 65536 00:27:14.585 } 00:27:14.585 ] 00:27:14.585 }' 00:27:14.585 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.585 10:37:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:15.152 10:37:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:15.152 [2024-07-26 10:37:28.041420] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:15.411 10:37:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:15.411 [2024-07-26 10:37:28.118025] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20280b0 00:27:15.411 [2024-07-26 10:37:28.120215] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:15.411 [2024-07-26 10:37:28.230935] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:15.411 [2024-07-26 10:37:28.232052] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:15.669 [2024-07-26 10:37:28.443783] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:15.669 [2024-07-26 10:37:28.443950] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:15.927 [2024-07-26 10:37:28.689948] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:16.185 [2024-07-26 10:37:28.902496] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:16.185 [2024-07-26 10:37:28.902985] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.443 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.443 [2024-07-26 10:37:29.224390] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:17.045 "name": "raid_bdev1", 00:27:17.045 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:17.045 "strip_size_kb": 0, 00:27:17.045 "state": "online", 00:27:17.045 "raid_level": "raid1", 00:27:17.045 
"superblock": false, 00:27:17.045 "num_base_bdevs": 4, 00:27:17.045 "num_base_bdevs_discovered": 4, 00:27:17.045 "num_base_bdevs_operational": 4, 00:27:17.045 "process": { 00:27:17.045 "type": "rebuild", 00:27:17.045 "target": "spare", 00:27:17.045 "progress": { 00:27:17.045 "blocks": 14336, 00:27:17.045 "percent": 21 00:27:17.045 } 00:27:17.045 }, 00:27:17.045 "base_bdevs_list": [ 00:27:17.045 { 00:27:17.045 "name": "spare", 00:27:17.045 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:17.045 "is_configured": true, 00:27:17.045 "data_offset": 0, 00:27:17.045 "data_size": 65536 00:27:17.045 }, 00:27:17.045 { 00:27:17.045 "name": "BaseBdev2", 00:27:17.045 "uuid": "88568c1a-cbba-5256-ae5a-800dcf469508", 00:27:17.045 "is_configured": true, 00:27:17.045 "data_offset": 0, 00:27:17.045 "data_size": 65536 00:27:17.045 }, 00:27:17.045 { 00:27:17.045 "name": "BaseBdev3", 00:27:17.045 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:17.045 "is_configured": true, 00:27:17.045 "data_offset": 0, 00:27:17.045 "data_size": 65536 00:27:17.045 }, 00:27:17.045 { 00:27:17.045 "name": "BaseBdev4", 00:27:17.045 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:17.045 "is_configured": true, 00:27:17.045 "data_offset": 0, 00:27:17.045 "data_size": 65536 00:27:17.045 } 00:27:17.045 ] 00:27:17.045 }' 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:17.045 [2024-07-26 10:37:29.472763] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:17.045 [2024-07-26 10:37:29.652691] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:17.045 [2024-07-26 10:37:29.748360] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:17.045 [2024-07-26 10:37:29.768543] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.045 [2024-07-26 10:37:29.768573] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:17.045 [2024-07-26 10:37:29.768582] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:17.045 [2024-07-26 10:37:29.781363] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fda8a0 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:17.045 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.046 10:37:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.303 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.303 "name": "raid_bdev1", 00:27:17.303 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:17.303 "strip_size_kb": 0, 00:27:17.303 "state": "online", 00:27:17.303 "raid_level": "raid1", 00:27:17.303 "superblock": false, 00:27:17.303 "num_base_bdevs": 4, 00:27:17.303 "num_base_bdevs_discovered": 3, 00:27:17.303 "num_base_bdevs_operational": 3, 00:27:17.303 "base_bdevs_list": [ 00:27:17.303 { 00:27:17.303 "name": null, 00:27:17.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.303 "is_configured": false, 00:27:17.303 "data_offset": 0, 00:27:17.303 "data_size": 65536 00:27:17.303 }, 00:27:17.303 { 00:27:17.303 "name": "BaseBdev2", 00:27:17.303 "uuid": "88568c1a-cbba-5256-ae5a-800dcf469508", 00:27:17.303 "is_configured": true, 00:27:17.303 "data_offset": 0, 00:27:17.303 "data_size": 65536 00:27:17.303 }, 00:27:17.303 { 00:27:17.303 "name": "BaseBdev3", 00:27:17.303 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:17.303 "is_configured": true, 00:27:17.303 "data_offset": 0, 00:27:17.303 "data_size": 65536 00:27:17.303 }, 00:27:17.303 { 00:27:17.303 "name": "BaseBdev4", 00:27:17.303 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:17.303 "is_configured": true, 00:27:17.303 "data_offset": 0, 00:27:17.303 "data_size": 65536 00:27:17.303 } 00:27:17.303 ] 00:27:17.303 }' 00:27:17.303 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.303 10:37:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.868 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.129 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.129 "name": "raid_bdev1", 00:27:18.129 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:18.129 "strip_size_kb": 0, 00:27:18.130 "state": "online", 00:27:18.130 "raid_level": 
"raid1", 00:27:18.130 "superblock": false, 00:27:18.130 "num_base_bdevs": 4, 00:27:18.130 "num_base_bdevs_discovered": 3, 00:27:18.130 "num_base_bdevs_operational": 3, 00:27:18.130 "base_bdevs_list": [ 00:27:18.130 { 00:27:18.130 "name": null, 00:27:18.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.130 "is_configured": false, 00:27:18.130 "data_offset": 0, 00:27:18.130 "data_size": 65536 00:27:18.130 }, 00:27:18.130 { 00:27:18.130 "name": "BaseBdev2", 00:27:18.130 "uuid": "88568c1a-cbba-5256-ae5a-800dcf469508", 00:27:18.130 "is_configured": true, 00:27:18.130 "data_offset": 0, 00:27:18.130 "data_size": 65536 00:27:18.130 }, 00:27:18.130 { 00:27:18.130 "name": "BaseBdev3", 00:27:18.130 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:18.130 "is_configured": true, 00:27:18.130 "data_offset": 0, 00:27:18.130 "data_size": 65536 00:27:18.130 }, 00:27:18.130 { 00:27:18.130 "name": "BaseBdev4", 00:27:18.130 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:18.130 "is_configured": true, 00:27:18.130 "data_offset": 0, 00:27:18.130 "data_size": 65536 00:27:18.130 } 00:27:18.130 ] 00:27:18.130 }' 00:27:18.130 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.130 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:18.130 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.130 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:18.130 10:37:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:18.387 [2024-07-26 10:37:31.183674] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:18.387 10:37:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:18.387 [2024-07-26 10:37:31.277028] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1faa680 00:27:18.387 [2024-07-26 10:37:31.278431] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:18.644 [2024-07-26 10:37:31.405886] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:18.644 [2024-07-26 10:37:31.407010] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:18.901 [2024-07-26 10:37:31.610528] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:18.901 [2024-07-26 10:37:31.610678] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:19.159 [2024-07-26 10:37:31.952395] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:19.417 [2024-07-26 10:37:32.079926] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.417 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.417 [2024-07-26 10:37:32.308915] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:19.675 [2024-07-26 10:37:32.428838] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:19.675 [2024-07-26 10:37:32.429049] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:19.675 "name": "raid_bdev1", 00:27:19.675 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:19.675 "strip_size_kb": 0, 00:27:19.675 "state": "online", 00:27:19.675 "raid_level": "raid1", 00:27:19.675 "superblock": false, 00:27:19.675 "num_base_bdevs": 4, 00:27:19.675 "num_base_bdevs_discovered": 4, 00:27:19.675 "num_base_bdevs_operational": 4, 00:27:19.675 "process": { 00:27:19.675 "type": "rebuild", 00:27:19.675 "target": "spare", 00:27:19.675 "progress": { 00:27:19.675 "blocks": 16384, 00:27:19.675 "percent": 25 00:27:19.675 } 00:27:19.675 }, 00:27:19.675 "base_bdevs_list": [ 00:27:19.675 { 00:27:19.675 "name": "spare", 00:27:19.675 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:19.675 "is_configured": true, 00:27:19.675 "data_offset": 0, 00:27:19.675 "data_size": 65536 00:27:19.675 }, 00:27:19.675 { 00:27:19.675 "name": "BaseBdev2", 00:27:19.675 "uuid": "88568c1a-cbba-5256-ae5a-800dcf469508", 00:27:19.675 "is_configured": true, 00:27:19.675 "data_offset": 0, 00:27:19.675 "data_size": 65536 00:27:19.675 }, 00:27:19.675 { 00:27:19.675 "name": "BaseBdev3", 00:27:19.675 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:19.675 "is_configured": true, 00:27:19.675 "data_offset": 0, 00:27:19.675 "data_size": 65536 00:27:19.675 }, 00:27:19.675 { 00:27:19.675 "name": "BaseBdev4", 00:27:19.675 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:19.675 "is_configured": true, 00:27:19.675 "data_offset": 0, 00:27:19.675 "data_size": 65536 00:27:19.675 } 00:27:19.675 ] 00:27:19.675 }' 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:27:19.675 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:19.933 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 
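The verify_raid_bdev_process checks traced above work by dumping the raid bdev JSON over the test's RPC socket and filtering it with jq; the rebuild counts as active while a process entry with type "rebuild" and target "spare" is reported. A minimal sketch of that polling pattern, assuming an SPDK tree at $SPDK_ROOT and the same socket path as in this log (poll_rebuild is an illustrative name, not a helper from the test suite):

# Sketch: poll rebuild progress of raid_bdev1 until the rebuild process entry disappears.
# Assumes rpc.py and jq are available and bdevperf is listening on /var/tmp/spdk-raid.sock.
poll_rebuild() {
    local rpc_sock=/var/tmp/spdk-raid.sock
    local info ptype target blocks
    while true; do
        # Same query the test uses: list all raid bdevs, keep the one named raid_bdev1.
        info=$("$SPDK_ROOT"/scripts/rpc.py -s "$rpc_sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(echo "$info" | jq -r '.process.type // "none"')
        [ "$ptype" = "rebuild" ] || break        # no active rebuild process any more
        target=$(echo "$info" | jq -r '.process.target // "none"')
        blocks=$(echo "$info" | jq -r '.process.progress.blocks')
        echo "rebuild target=$target blocks=$blocks"
        sleep 1
    done
}

The same // "none" fallback used by the test keeps the check well-defined once the process object is no longer present in the JSON.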
00:27:19.933 10:37:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:19.933 [2024-07-26 10:37:32.797052] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:19.933 [2024-07-26 10:37:32.817920] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:20.500 [2024-07-26 10:37:33.129079] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1fda8a0 00:27:20.500 [2024-07-26 10:37:33.129108] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1faa680 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.500 [2024-07-26 10:37:33.249905] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:20.500 [2024-07-26 10:37:33.360907] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:20.500 "name": "raid_bdev1", 00:27:20.500 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:20.500 "strip_size_kb": 0, 00:27:20.500 "state": "online", 00:27:20.500 "raid_level": "raid1", 00:27:20.500 "superblock": false, 00:27:20.500 "num_base_bdevs": 4, 00:27:20.500 "num_base_bdevs_discovered": 3, 00:27:20.500 "num_base_bdevs_operational": 3, 00:27:20.500 "process": { 00:27:20.500 "type": "rebuild", 00:27:20.500 "target": "spare", 00:27:20.500 "progress": { 00:27:20.500 "blocks": 28672, 00:27:20.500 "percent": 43 00:27:20.500 } 00:27:20.500 }, 00:27:20.500 "base_bdevs_list": [ 00:27:20.500 { 00:27:20.500 "name": "spare", 00:27:20.500 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:20.500 "is_configured": true, 00:27:20.500 "data_offset": 0, 00:27:20.500 "data_size": 65536 00:27:20.500 }, 00:27:20.500 { 00:27:20.500 "name": null, 00:27:20.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.500 "is_configured": false, 00:27:20.500 "data_offset": 0, 00:27:20.500 "data_size": 65536 00:27:20.500 }, 00:27:20.500 { 00:27:20.500 "name": "BaseBdev3", 00:27:20.500 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:20.500 "is_configured": true, 00:27:20.500 "data_offset": 0, 00:27:20.500 "data_size": 65536 00:27:20.500 }, 00:27:20.500 { 
00:27:20.500 "name": "BaseBdev4", 00:27:20.500 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:20.500 "is_configured": true, 00:27:20.500 "data_offset": 0, 00:27:20.500 "data_size": 65536 00:27:20.500 } 00:27:20.500 ] 00:27:20.500 }' 00:27:20.500 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=897 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.758 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.016 [2024-07-26 10:37:33.718306] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:21.016 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.016 "name": "raid_bdev1", 00:27:21.016 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:21.016 "strip_size_kb": 0, 00:27:21.016 "state": "online", 00:27:21.016 "raid_level": "raid1", 00:27:21.016 "superblock": false, 00:27:21.016 "num_base_bdevs": 4, 00:27:21.016 "num_base_bdevs_discovered": 3, 00:27:21.016 "num_base_bdevs_operational": 3, 00:27:21.016 "process": { 00:27:21.016 "type": "rebuild", 00:27:21.016 "target": "spare", 00:27:21.016 "progress": { 00:27:21.016 "blocks": 30720, 00:27:21.016 "percent": 46 00:27:21.016 } 00:27:21.016 }, 00:27:21.016 "base_bdevs_list": [ 00:27:21.016 { 00:27:21.016 "name": "spare", 00:27:21.016 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:21.016 "is_configured": true, 00:27:21.016 "data_offset": 0, 00:27:21.016 "data_size": 65536 00:27:21.016 }, 00:27:21.016 { 00:27:21.016 "name": null, 00:27:21.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.016 "is_configured": false, 00:27:21.016 "data_offset": 0, 00:27:21.016 "data_size": 65536 00:27:21.016 }, 00:27:21.016 { 00:27:21.016 "name": "BaseBdev3", 00:27:21.016 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:21.016 "is_configured": true, 00:27:21.016 "data_offset": 0, 00:27:21.016 "data_size": 65536 00:27:21.016 }, 00:27:21.016 { 00:27:21.016 "name": "BaseBdev4", 00:27:21.016 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:21.016 "is_configured": true, 00:27:21.016 "data_offset": 0, 00:27:21.016 "data_size": 65536 00:27:21.016 } 00:27:21.016 
] 00:27:21.016 }' 00:27:21.016 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:21.016 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.016 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:21.016 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.016 10:37:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:21.274 [2024-07-26 10:37:33.943995] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:21.532 [2024-07-26 10:37:34.323279] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:21.533 [2024-07-26 10:37:34.431552] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:21.533 [2024-07-26 10:37:34.431705] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:22.098 [2024-07-26 10:37:34.771358] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:22.098 [2024-07-26 10:37:34.771625] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.098 10:37:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.356 10:37:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:22.356 "name": "raid_bdev1", 00:27:22.356 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:22.356 "strip_size_kb": 0, 00:27:22.356 "state": "online", 00:27:22.356 "raid_level": "raid1", 00:27:22.356 "superblock": false, 00:27:22.356 "num_base_bdevs": 4, 00:27:22.356 "num_base_bdevs_discovered": 3, 00:27:22.356 "num_base_bdevs_operational": 3, 00:27:22.356 "process": { 00:27:22.356 "type": "rebuild", 00:27:22.356 "target": "spare", 00:27:22.356 "progress": { 00:27:22.356 "blocks": 49152, 00:27:22.356 "percent": 75 00:27:22.356 } 00:27:22.356 }, 00:27:22.356 "base_bdevs_list": [ 00:27:22.356 { 00:27:22.356 "name": "spare", 00:27:22.356 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:22.356 "is_configured": true, 00:27:22.356 "data_offset": 0, 00:27:22.356 "data_size": 65536 00:27:22.356 }, 00:27:22.356 { 00:27:22.356 "name": null, 00:27:22.356 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:22.356 "is_configured": false, 00:27:22.356 "data_offset": 0, 00:27:22.356 "data_size": 65536 00:27:22.356 }, 00:27:22.356 { 00:27:22.356 "name": "BaseBdev3", 00:27:22.356 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:22.356 "is_configured": true, 00:27:22.356 "data_offset": 0, 00:27:22.356 "data_size": 65536 00:27:22.356 }, 00:27:22.356 { 00:27:22.356 "name": "BaseBdev4", 00:27:22.356 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:22.356 "is_configured": true, 00:27:22.356 "data_offset": 0, 00:27:22.356 "data_size": 65536 00:27:22.356 } 00:27:22.356 ] 00:27:22.356 }' 00:27:22.356 10:37:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.356 10:37:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.356 10:37:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.356 10:37:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.356 10:37:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:22.923 [2024-07-26 10:37:35.545837] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:23.182 [2024-07-26 10:37:35.885132] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:23.182 [2024-07-26 10:37:35.985432] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:23.182 [2024-07-26 10:37:35.987346] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.441 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.699 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.699 "name": "raid_bdev1", 00:27:23.699 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:23.699 "strip_size_kb": 0, 00:27:23.699 "state": "online", 00:27:23.699 "raid_level": "raid1", 00:27:23.699 "superblock": false, 00:27:23.699 "num_base_bdevs": 4, 00:27:23.699 "num_base_bdevs_discovered": 3, 00:27:23.699 "num_base_bdevs_operational": 3, 00:27:23.699 "base_bdevs_list": [ 00:27:23.699 { 00:27:23.699 "name": "spare", 00:27:23.699 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:23.699 "is_configured": true, 00:27:23.699 "data_offset": 0, 00:27:23.699 "data_size": 65536 00:27:23.700 }, 00:27:23.700 { 00:27:23.700 "name": null, 00:27:23.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.700 
"is_configured": false, 00:27:23.700 "data_offset": 0, 00:27:23.700 "data_size": 65536 00:27:23.700 }, 00:27:23.700 { 00:27:23.700 "name": "BaseBdev3", 00:27:23.700 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:23.700 "is_configured": true, 00:27:23.700 "data_offset": 0, 00:27:23.700 "data_size": 65536 00:27:23.700 }, 00:27:23.700 { 00:27:23.700 "name": "BaseBdev4", 00:27:23.700 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:23.700 "is_configured": true, 00:27:23.700 "data_offset": 0, 00:27:23.700 "data_size": 65536 00:27:23.700 } 00:27:23.700 ] 00:27:23.700 }' 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.700 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.958 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.958 "name": "raid_bdev1", 00:27:23.958 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:23.958 "strip_size_kb": 0, 00:27:23.958 "state": "online", 00:27:23.958 "raid_level": "raid1", 00:27:23.958 "superblock": false, 00:27:23.958 "num_base_bdevs": 4, 00:27:23.958 "num_base_bdevs_discovered": 3, 00:27:23.958 "num_base_bdevs_operational": 3, 00:27:23.958 "base_bdevs_list": [ 00:27:23.958 { 00:27:23.958 "name": "spare", 00:27:23.959 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:23.959 "is_configured": true, 00:27:23.959 "data_offset": 0, 00:27:23.959 "data_size": 65536 00:27:23.959 }, 00:27:23.959 { 00:27:23.959 "name": null, 00:27:23.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.959 "is_configured": false, 00:27:23.959 "data_offset": 0, 00:27:23.959 "data_size": 65536 00:27:23.959 }, 00:27:23.959 { 00:27:23.959 "name": "BaseBdev3", 00:27:23.959 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:23.959 "is_configured": true, 00:27:23.959 "data_offset": 0, 00:27:23.959 "data_size": 65536 00:27:23.959 }, 00:27:23.959 { 00:27:23.959 "name": "BaseBdev4", 00:27:23.959 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:23.959 "is_configured": true, 00:27:23.959 "data_offset": 0, 00:27:23.959 "data_size": 65536 00:27:23.959 } 00:27:23.959 ] 00:27:23.959 }' 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.959 10:37:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.959 10:37:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.271 10:37:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.271 "name": "raid_bdev1", 00:27:24.271 "uuid": "d51a2269-4499-41ee-a578-df8502f310ff", 00:27:24.271 "strip_size_kb": 0, 00:27:24.271 "state": "online", 00:27:24.271 "raid_level": "raid1", 00:27:24.271 "superblock": false, 00:27:24.271 "num_base_bdevs": 4, 00:27:24.271 "num_base_bdevs_discovered": 3, 00:27:24.271 "num_base_bdevs_operational": 3, 00:27:24.271 "base_bdevs_list": [ 00:27:24.271 { 00:27:24.271 "name": "spare", 00:27:24.271 "uuid": "bba7f498-c1ee-5ad9-a3c1-78e2aa7cb98b", 00:27:24.271 "is_configured": true, 00:27:24.271 "data_offset": 0, 00:27:24.271 "data_size": 65536 00:27:24.271 }, 00:27:24.271 { 00:27:24.271 "name": null, 00:27:24.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.271 "is_configured": false, 00:27:24.271 "data_offset": 0, 00:27:24.271 "data_size": 65536 00:27:24.271 }, 00:27:24.271 { 00:27:24.271 "name": "BaseBdev3", 00:27:24.271 "uuid": "10150b57-a8dc-5fc7-974a-b99ef8c9755f", 00:27:24.271 "is_configured": true, 00:27:24.271 "data_offset": 0, 00:27:24.271 "data_size": 65536 00:27:24.271 }, 00:27:24.271 { 00:27:24.271 "name": "BaseBdev4", 00:27:24.271 "uuid": "4dbca2ed-ef8a-5b2a-ab52-829fdb4fe835", 00:27:24.271 "is_configured": true, 00:27:24.271 "data_offset": 0, 00:27:24.271 "data_size": 65536 00:27:24.271 } 00:27:24.271 ] 00:27:24.271 }' 00:27:24.271 10:37:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.271 10:37:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:24.847 10:37:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:25.107 
[2024-07-26 10:37:37.815998] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:27:25.107 [2024-07-26 10:37:37.816028] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:27:25.107
00:27:25.107                                 Latency(us)
00:27:25.107 Device Information : runtime(s)    IOPS      MiB/s    Fail/s    TO/s     Average      min        max
00:27:25.107 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:27:25.107 raid_bdev1         :    11.00      96.01    288.02      0.00    0.00    13718.92    275.25  110729.63
00:27:25.107 ===================================================================================================================
00:27:25.107 Total              :               96.01    288.02      0.00    0.00    13718.92    275.25  110729.63
00:27:25.107 [2024-07-26 10:37:37.851709] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:27:25.107 [2024-07-26 10:37:37.851736] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:27:25.107 [2024-07-26 10:37:37.851822] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:27:25.107 [2024-07-26 10:37:37.851833] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fd5840 name raid_bdev1, state offline
00:27:25.107 0
00:27:25.107 10:37:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length
00:27:25.107 10:37:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]]
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']'
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']'
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:27:25.366 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:27:25.367 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:27:25.367 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:27:25.367 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:27:25.626 /dev/nbd0
00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io --
common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:25.626 1+0 records in 00:27:25.626 1+0 records out 00:27:25.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255906 s, 16.0 MB/s 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:25.626 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:27:25.886 /dev/nbd1 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:25.886 1+0 records in 00:27:25.886 1+0 records out 00:27:25.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243725 s, 16.8 MB/s 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:25.886 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:26.145 10:37:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:27:26.405 /dev/nbd1 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.405 1+0 records in 00:27:26.405 1+0 records out 00:27:26.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268179 s, 15.3 MB/s 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:26.405 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:26.664 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 3502205 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 3502205 ']' 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 3502205 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.924 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3502205 00:27:27.183 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:27.183 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:27.183 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3502205' 00:27:27.183 killing process with pid 3502205 00:27:27.183 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 3502205 00:27:27.183 Received shutdown signal, test time was about 13.015177 seconds 00:27:27.183 00:27:27.183 Latency(us) 00:27:27.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.183 =================================================================================================================== 00:27:27.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.183 [2024-07-26 10:37:39.869017] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:27.183 10:37:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 3502205 00:27:27.183 [2024-07-26 10:37:39.902369] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:27.183 10:37:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:27:27.183 00:27:27.183 real 0m18.344s 00:27:27.183 user 0m28.157s 00:27:27.183 sys 0m3.391s 00:27:27.183 10:37:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:27.183 10:37:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.183 ************************************ 00:27:27.183 END TEST raid_rebuild_test_io 00:27:27.183 ************************************ 00:27:27.442 10:37:40 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:27:27.442 10:37:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:27.442 10:37:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:27.442 10:37:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:27.442 ************************************ 00:27:27.442 START TEST raid_rebuild_test_sb_io 00:27:27.442 ************************************ 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:27:27.442 10:37:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3505423 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3505423 /var/tmp/spdk-raid.sock 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 3505423 ']' 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:27.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:27.442 10:37:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.442 [2024-07-26 10:37:40.238837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:27.442 [2024-07-26 10:37:40.238894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505423 ] 00:27:27.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:27.442 Zero copy mechanism will not be used. 00:27:27.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.442 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:27.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.442 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:27.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.442 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:27.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.442 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:27.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.442 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:27.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.442 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.5 cannot be used 
00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:27.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.443 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:27.702 [2024-07-26 10:37:40.373765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.702 [2024-07-26 10:37:40.417219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.702 [2024-07-26 10:37:40.479585] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:27.702 [2024-07-26 10:37:40.479627] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:28.269 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.269 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:27:28.269 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:28.269 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:28.528 
BaseBdev1_malloc 00:27:28.528 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:28.787 [2024-07-26 10:37:41.575832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:28.787 [2024-07-26 10:37:41.575874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.787 [2024-07-26 10:37:41.575893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2792370 00:27:28.787 [2024-07-26 10:37:41.575904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.787 [2024-07-26 10:37:41.577239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.787 [2024-07-26 10:37:41.577265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:28.787 BaseBdev1 00:27:28.787 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:28.787 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:29.046 BaseBdev2_malloc 00:27:29.046 10:37:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:29.305 [2024-07-26 10:37:42.029350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:29.305 [2024-07-26 10:37:42.029388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.305 [2024-07-26 10:37:42.029405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x274e0d0 00:27:29.305 [2024-07-26 10:37:42.029416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.305 [2024-07-26 10:37:42.030792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.305 [2024-07-26 10:37:42.030817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:29.305 BaseBdev2 00:27:29.305 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:29.305 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:29.564 BaseBdev3_malloc 00:27:29.564 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:29.823 [2024-07-26 10:37:42.482803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:29.823 [2024-07-26 10:37:42.482840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.823 [2024-07-26 10:37:42.482857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27399f0 00:27:29.823 [2024-07-26 10:37:42.482868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.823 [2024-07-26 10:37:42.484093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:27:29.823 [2024-07-26 10:37:42.484118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:29.823 BaseBdev3 00:27:29.823 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:29.823 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:29.823 BaseBdev4_malloc 00:27:30.082 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:30.082 [2024-07-26 10:37:42.927940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:30.082 [2024-07-26 10:37:42.927976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.082 [2024-07-26 10:37:42.927996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273b270 00:27:30.082 [2024-07-26 10:37:42.928008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.082 [2024-07-26 10:37:42.929223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.082 [2024-07-26 10:37:42.929247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:30.082 BaseBdev4 00:27:30.082 10:37:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:30.340 spare_malloc 00:27:30.340 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:30.600 spare_delay 00:27:30.600 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:30.860 [2024-07-26 10:37:43.601680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:30.860 [2024-07-26 10:37:43.601717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.860 [2024-07-26 10:37:43.601736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273d700 00:27:30.860 [2024-07-26 10:37:43.601747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.860 [2024-07-26 10:37:43.602985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.860 [2024-07-26 10:37:43.603011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:30.860 spare 00:27:30.860 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:31.118 [2024-07-26 10:37:43.830316] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:31.118 [2024-07-26 10:37:43.831353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:31.118 [2024-07-26 
10:37:43.831404] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:31.118 [2024-07-26 10:37:43.831444] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:31.119 [2024-07-26 10:37:43.831601] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x273f840 00:27:31.119 [2024-07-26 10:37:43.831611] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:31.119 [2024-07-26 10:37:43.831778] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2743140 00:27:31.119 [2024-07-26 10:37:43.831902] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x273f840 00:27:31.119 [2024-07-26 10:37:43.831911] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x273f840 00:27:31.119 [2024-07-26 10:37:43.832002] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.119 10:37:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.377 "name": "raid_bdev1", 00:27:31.377 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:31.377 "strip_size_kb": 0, 00:27:31.377 "state": "online", 00:27:31.377 "raid_level": "raid1", 00:27:31.377 "superblock": true, 00:27:31.377 "num_base_bdevs": 4, 00:27:31.377 "num_base_bdevs_discovered": 4, 00:27:31.377 "num_base_bdevs_operational": 4, 00:27:31.377 "base_bdevs_list": [ 00:27:31.377 { 00:27:31.377 "name": "BaseBdev1", 00:27:31.377 "uuid": "28627141-43c2-517c-86e1-816d4b608288", 00:27:31.377 "is_configured": true, 00:27:31.377 "data_offset": 2048, 00:27:31.377 "data_size": 63488 00:27:31.377 }, 00:27:31.377 { 00:27:31.377 "name": "BaseBdev2", 00:27:31.377 "uuid": "f98d92a7-659a-5571-82e5-e9ba5ead9ada", 00:27:31.377 "is_configured": true, 00:27:31.377 "data_offset": 2048, 00:27:31.377 "data_size": 63488 00:27:31.377 }, 00:27:31.377 { 00:27:31.377 "name": "BaseBdev3", 00:27:31.377 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:31.377 "is_configured": true, 00:27:31.377 
"data_offset": 2048, 00:27:31.377 "data_size": 63488 00:27:31.377 }, 00:27:31.377 { 00:27:31.377 "name": "BaseBdev4", 00:27:31.377 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:31.377 "is_configured": true, 00:27:31.377 "data_offset": 2048, 00:27:31.377 "data_size": 63488 00:27:31.377 } 00:27:31.377 ] 00:27:31.377 }' 00:27:31.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.377 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:31.945 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:31.945 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:32.204 [2024-07-26 10:37:44.849256] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:32.204 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:27:32.204 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.204 10:37:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:32.204 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:27:32.204 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:27:32.463 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:32.463 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:32.463 [2024-07-26 10:37:45.211880] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27447d0 00:27:32.463 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:32.463 Zero copy mechanism will not be used. 00:27:32.463 Running I/O for 60 seconds... 
00:27:32.463 [2024-07-26 10:37:45.330803] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:32.463 [2024-07-26 10:37:45.345761] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x27447d0 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.722 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.981 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.981 "name": "raid_bdev1", 00:27:32.981 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:32.981 "strip_size_kb": 0, 00:27:32.981 "state": "online", 00:27:32.981 "raid_level": "raid1", 00:27:32.981 "superblock": true, 00:27:32.981 "num_base_bdevs": 4, 00:27:32.981 "num_base_bdevs_discovered": 3, 00:27:32.981 "num_base_bdevs_operational": 3, 00:27:32.981 "base_bdevs_list": [ 00:27:32.981 { 00:27:32.981 "name": null, 00:27:32.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.981 "is_configured": false, 00:27:32.981 "data_offset": 2048, 00:27:32.981 "data_size": 63488 00:27:32.981 }, 00:27:32.981 { 00:27:32.981 "name": "BaseBdev2", 00:27:32.981 "uuid": "f98d92a7-659a-5571-82e5-e9ba5ead9ada", 00:27:32.981 "is_configured": true, 00:27:32.981 "data_offset": 2048, 00:27:32.981 "data_size": 63488 00:27:32.981 }, 00:27:32.981 { 00:27:32.981 "name": "BaseBdev3", 00:27:32.981 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:32.981 "is_configured": true, 00:27:32.981 "data_offset": 2048, 00:27:32.981 "data_size": 63488 00:27:32.981 }, 00:27:32.981 { 00:27:32.981 "name": "BaseBdev4", 00:27:32.981 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:32.981 "is_configured": true, 00:27:32.981 "data_offset": 2048, 00:27:32.981 "data_size": 63488 00:27:32.981 } 00:27:32.981 ] 00:27:32.981 }' 00:27:32.981 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.981 10:37:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 
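The run has now pulled BaseBdev1 out of the array while bdevperf keeps issuing I/O, verified that raid_bdev1 stays online with 3 of 4 members, and is about to hand it the delayed spare. Condensed with the same editorial shorthand as the sketch above (the real script drives this through its verify_raid_bdev_state/verify_raid_bdev_process helpers and the jq filters visible in the trace), the degrade-and-rebuild sequence being exercised is roughly:

  RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_remove_base_bdev BaseBdev1        # array degrades to 3/4 members but stays online
  $RPC bdev_raid_add_base_bdev raid_bdev1 spare    # spare is claimed and a rebuild process starts
  # Poll until the rebuild process is gone; while it runs, .process.type is "rebuild"
  # and .process.target is "spare", with .process.progress.blocks advancing.
  while true; do
      info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [ "$(echo "$info" | jq -r '.process.target // "none"')" = "none" ] && break
      sleep 1
  done

The JSON dumps that follow are what such polling sees: num_base_bdevs_discovered drops from 4 to 3, the removed slot becomes a null entry with an all-zero uuid, and the process object reports rebuild progress in "blocks"/"percent" until the rebuild finishes or its target is removed.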
00:27:33.810 [2024-07-26 10:37:46.457038] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:33.810 10:37:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:33.810 [2024-07-26 10:37:46.534912] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27920b0 00:27:33.810 [2024-07-26 10:37:46.537136] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:33.810 [2024-07-26 10:37:46.659785] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:34.069 [2024-07-26 10:37:46.778672] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:34.069 [2024-07-26 10:37:46.779293] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:34.328 [2024-07-26 10:37:47.137658] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:34.328 [2024-07-26 10:37:47.138770] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:34.588 [2024-07-26 10:37:47.360979] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:34.588 [2024-07-26 10:37:47.361211] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.847 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.107 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:35.107 "name": "raid_bdev1", 00:27:35.107 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:35.107 "strip_size_kb": 0, 00:27:35.107 "state": "online", 00:27:35.107 "raid_level": "raid1", 00:27:35.107 "superblock": true, 00:27:35.107 "num_base_bdevs": 4, 00:27:35.107 "num_base_bdevs_discovered": 4, 00:27:35.107 "num_base_bdevs_operational": 4, 00:27:35.107 "process": { 00:27:35.107 "type": "rebuild", 00:27:35.107 "target": "spare", 00:27:35.107 "progress": { 00:27:35.107 "blocks": 12288, 00:27:35.107 "percent": 19 00:27:35.107 } 00:27:35.107 }, 00:27:35.107 "base_bdevs_list": [ 00:27:35.107 { 00:27:35.107 "name": "spare", 00:27:35.107 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:35.107 "is_configured": true, 00:27:35.107 "data_offset": 2048, 00:27:35.107 "data_size": 63488 00:27:35.107 }, 00:27:35.107 { 00:27:35.107 "name": "BaseBdev2", 00:27:35.107 "uuid": "f98d92a7-659a-5571-82e5-e9ba5ead9ada", 00:27:35.107 
"is_configured": true, 00:27:35.107 "data_offset": 2048, 00:27:35.107 "data_size": 63488 00:27:35.107 }, 00:27:35.107 { 00:27:35.107 "name": "BaseBdev3", 00:27:35.107 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:35.107 "is_configured": true, 00:27:35.107 "data_offset": 2048, 00:27:35.107 "data_size": 63488 00:27:35.107 }, 00:27:35.107 { 00:27:35.107 "name": "BaseBdev4", 00:27:35.107 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:35.107 "is_configured": true, 00:27:35.107 "data_offset": 2048, 00:27:35.107 "data_size": 63488 00:27:35.107 } 00:27:35.107 ] 00:27:35.107 }' 00:27:35.107 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:35.107 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:35.107 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:35.107 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:35.107 10:37:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:35.107 [2024-07-26 10:37:47.875046] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:35.107 [2024-07-26 10:37:47.875645] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:35.367 [2024-07-26 10:37:48.043216] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:35.367 [2024-07-26 10:37:48.103408] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:35.367 [2024-07-26 10:37:48.124649] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.367 [2024-07-26 10:37:48.124679] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:35.367 [2024-07-26 10:37:48.124688] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:35.367 [2024-07-26 10:37:48.130176] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x27447d0 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.367 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.626 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.626 "name": "raid_bdev1", 00:27:35.626 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:35.626 "strip_size_kb": 0, 00:27:35.626 "state": "online", 00:27:35.626 "raid_level": "raid1", 00:27:35.626 "superblock": true, 00:27:35.626 "num_base_bdevs": 4, 00:27:35.626 "num_base_bdevs_discovered": 3, 00:27:35.626 "num_base_bdevs_operational": 3, 00:27:35.626 "base_bdevs_list": [ 00:27:35.626 { 00:27:35.626 "name": null, 00:27:35.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.626 "is_configured": false, 00:27:35.626 "data_offset": 2048, 00:27:35.626 "data_size": 63488 00:27:35.626 }, 00:27:35.626 { 00:27:35.626 "name": "BaseBdev2", 00:27:35.626 "uuid": "f98d92a7-659a-5571-82e5-e9ba5ead9ada", 00:27:35.626 "is_configured": true, 00:27:35.626 "data_offset": 2048, 00:27:35.626 "data_size": 63488 00:27:35.626 }, 00:27:35.626 { 00:27:35.626 "name": "BaseBdev3", 00:27:35.626 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:35.626 "is_configured": true, 00:27:35.626 "data_offset": 2048, 00:27:35.626 "data_size": 63488 00:27:35.626 }, 00:27:35.626 { 00:27:35.626 "name": "BaseBdev4", 00:27:35.626 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:35.626 "is_configured": true, 00:27:35.626 "data_offset": 2048, 00:27:35.626 "data_size": 63488 00:27:35.626 } 00:27:35.626 ] 00:27:35.626 }' 00:27:35.626 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.626 10:37:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.195 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.453 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:36.453 "name": "raid_bdev1", 00:27:36.453 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:36.453 "strip_size_kb": 0, 00:27:36.453 "state": "online", 00:27:36.453 "raid_level": "raid1", 00:27:36.453 "superblock": true, 00:27:36.453 "num_base_bdevs": 4, 00:27:36.453 "num_base_bdevs_discovered": 3, 00:27:36.453 "num_base_bdevs_operational": 3, 00:27:36.453 "base_bdevs_list": [ 00:27:36.453 { 00:27:36.453 "name": null, 00:27:36.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.453 "is_configured": false, 00:27:36.453 "data_offset": 2048, 00:27:36.453 "data_size": 63488 00:27:36.454 }, 00:27:36.454 { 00:27:36.454 "name": 
"BaseBdev2", 00:27:36.454 "uuid": "f98d92a7-659a-5571-82e5-e9ba5ead9ada", 00:27:36.454 "is_configured": true, 00:27:36.454 "data_offset": 2048, 00:27:36.454 "data_size": 63488 00:27:36.454 }, 00:27:36.454 { 00:27:36.454 "name": "BaseBdev3", 00:27:36.454 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:36.454 "is_configured": true, 00:27:36.454 "data_offset": 2048, 00:27:36.454 "data_size": 63488 00:27:36.454 }, 00:27:36.454 { 00:27:36.454 "name": "BaseBdev4", 00:27:36.454 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:36.454 "is_configured": true, 00:27:36.454 "data_offset": 2048, 00:27:36.454 "data_size": 63488 00:27:36.454 } 00:27:36.454 ] 00:27:36.454 }' 00:27:36.454 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:36.454 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:36.454 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:36.454 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:36.454 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:36.713 [2024-07-26 10:37:49.548261] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:36.713 10:37:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:36.972 [2024-07-26 10:37:49.616695] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2717cc0 00:27:36.972 [2024-07-26 10:37:49.618076] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:36.972 [2024-07-26 10:37:49.737194] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:36.972 [2024-07-26 10:37:49.737473] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:37.232 [2024-07-26 10:37:49.948579] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:37.232 [2024-07-26 10:37:49.948729] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:37.491 [2024-07-26 10:37:50.187987] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:37.750 [2024-07-26 10:37:50.415119] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:37.750 [2024-07-26 10:37:50.415282] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:37.750 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:37.750 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.750 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:37.750 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:37.750 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.750 10:37:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.750 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.068 [2024-07-26 10:37:50.665463] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:38.068 [2024-07-26 10:37:50.802750] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:38.068 [2024-07-26 10:37:50.803335] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:38.068 "name": "raid_bdev1", 00:27:38.068 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:38.068 "strip_size_kb": 0, 00:27:38.068 "state": "online", 00:27:38.068 "raid_level": "raid1", 00:27:38.068 "superblock": true, 00:27:38.068 "num_base_bdevs": 4, 00:27:38.068 "num_base_bdevs_discovered": 4, 00:27:38.068 "num_base_bdevs_operational": 4, 00:27:38.068 "process": { 00:27:38.068 "type": "rebuild", 00:27:38.068 "target": "spare", 00:27:38.068 "progress": { 00:27:38.068 "blocks": 16384, 00:27:38.068 "percent": 25 00:27:38.068 } 00:27:38.068 }, 00:27:38.068 "base_bdevs_list": [ 00:27:38.068 { 00:27:38.068 "name": "spare", 00:27:38.068 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:38.068 "is_configured": true, 00:27:38.068 "data_offset": 2048, 00:27:38.068 "data_size": 63488 00:27:38.068 }, 00:27:38.068 { 00:27:38.068 "name": "BaseBdev2", 00:27:38.068 "uuid": "f98d92a7-659a-5571-82e5-e9ba5ead9ada", 00:27:38.068 "is_configured": true, 00:27:38.068 "data_offset": 2048, 00:27:38.068 "data_size": 63488 00:27:38.068 }, 00:27:38.068 { 00:27:38.068 "name": "BaseBdev3", 00:27:38.068 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:38.068 "is_configured": true, 00:27:38.068 "data_offset": 2048, 00:27:38.068 "data_size": 63488 00:27:38.068 }, 00:27:38.068 { 00:27:38.068 "name": "BaseBdev4", 00:27:38.068 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:38.068 "is_configured": true, 00:27:38.068 "data_offset": 2048, 00:27:38.068 "data_size": 63488 00:27:38.068 } 00:27:38.068 ] 00:27:38.068 }' 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:38.068 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
'[' 4 -gt 2 ']' 00:27:38.068 10:37:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:38.336 [2024-07-26 10:37:51.138164] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:38.336 [2024-07-26 10:37:51.163446] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:38.593 [2024-07-26 10:37:51.380518] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x27447d0 00:27:38.593 [2024-07-26 10:37:51.380544] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2717cc0 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.593 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:38.851 "name": "raid_bdev1", 00:27:38.851 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:38.851 "strip_size_kb": 0, 00:27:38.851 "state": "online", 00:27:38.851 "raid_level": "raid1", 00:27:38.851 "superblock": true, 00:27:38.851 "num_base_bdevs": 4, 00:27:38.851 "num_base_bdevs_discovered": 3, 00:27:38.851 "num_base_bdevs_operational": 3, 00:27:38.851 "process": { 00:27:38.851 "type": "rebuild", 00:27:38.851 "target": "spare", 00:27:38.851 "progress": { 00:27:38.851 "blocks": 24576, 00:27:38.851 "percent": 38 00:27:38.851 } 00:27:38.851 }, 00:27:38.851 "base_bdevs_list": [ 00:27:38.851 { 00:27:38.851 "name": "spare", 00:27:38.851 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:38.851 "is_configured": true, 00:27:38.851 "data_offset": 2048, 00:27:38.851 "data_size": 63488 00:27:38.851 }, 00:27:38.851 { 00:27:38.851 "name": null, 00:27:38.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.851 "is_configured": false, 00:27:38.851 "data_offset": 2048, 00:27:38.851 "data_size": 63488 00:27:38.851 }, 00:27:38.851 { 00:27:38.851 "name": "BaseBdev3", 00:27:38.851 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:38.851 "is_configured": true, 00:27:38.851 "data_offset": 2048, 00:27:38.851 "data_size": 63488 00:27:38.851 }, 00:27:38.851 { 00:27:38.851 "name": "BaseBdev4", 00:27:38.851 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:38.851 "is_configured": true, 00:27:38.851 "data_offset": 2048, 00:27:38.851 "data_size": 63488 00:27:38.851 } 00:27:38.851 ] 00:27:38.851 }' 00:27:38.851 10:37:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=915 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.851 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.110 [2024-07-26 10:37:51.821245] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:39.110 [2024-07-26 10:37:51.821633] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:39.110 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.110 "name": "raid_bdev1", 00:27:39.110 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:39.110 "strip_size_kb": 0, 00:27:39.110 "state": "online", 00:27:39.110 "raid_level": "raid1", 00:27:39.110 "superblock": true, 00:27:39.110 "num_base_bdevs": 4, 00:27:39.110 "num_base_bdevs_discovered": 3, 00:27:39.110 "num_base_bdevs_operational": 3, 00:27:39.110 "process": { 00:27:39.110 "type": "rebuild", 00:27:39.110 "target": "spare", 00:27:39.110 "progress": { 00:27:39.110 "blocks": 28672, 00:27:39.110 "percent": 45 00:27:39.110 } 00:27:39.110 }, 00:27:39.110 "base_bdevs_list": [ 00:27:39.110 { 00:27:39.110 "name": "spare", 00:27:39.110 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:39.110 "is_configured": true, 00:27:39.110 "data_offset": 2048, 00:27:39.110 "data_size": 63488 00:27:39.110 }, 00:27:39.110 { 00:27:39.110 "name": null, 00:27:39.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.110 "is_configured": false, 00:27:39.110 "data_offset": 2048, 00:27:39.110 "data_size": 63488 00:27:39.110 }, 00:27:39.110 { 00:27:39.110 "name": "BaseBdev3", 00:27:39.110 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:39.110 "is_configured": true, 00:27:39.110 "data_offset": 2048, 00:27:39.110 "data_size": 63488 00:27:39.110 }, 00:27:39.110 { 00:27:39.110 "name": "BaseBdev4", 00:27:39.110 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:39.110 "is_configured": true, 00:27:39.110 "data_offset": 2048, 00:27:39.110 "data_size": 63488 00:27:39.110 } 00:27:39.110 ] 00:27:39.110 }' 00:27:39.110 10:37:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.110 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.110 10:37:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.369 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.369 10:37:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:39.369 [2024-07-26 10:37:52.235630] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:39.369 [2024-07-26 10:37:52.236021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:39.936 [2024-07-26 10:37:52.547875] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.195 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.195 [2024-07-26 10:37:53.080373] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:27:40.453 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.454 "name": "raid_bdev1", 00:27:40.454 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:40.454 "strip_size_kb": 0, 00:27:40.454 "state": "online", 00:27:40.454 "raid_level": "raid1", 00:27:40.454 "superblock": true, 00:27:40.454 "num_base_bdevs": 4, 00:27:40.454 "num_base_bdevs_discovered": 3, 00:27:40.454 "num_base_bdevs_operational": 3, 00:27:40.454 "process": { 00:27:40.454 "type": "rebuild", 00:27:40.454 "target": "spare", 00:27:40.454 "progress": { 00:27:40.454 "blocks": 49152, 00:27:40.454 "percent": 77 00:27:40.454 } 00:27:40.454 }, 00:27:40.454 "base_bdevs_list": [ 00:27:40.454 { 00:27:40.454 "name": "spare", 00:27:40.454 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:40.454 "is_configured": true, 00:27:40.454 "data_offset": 2048, 00:27:40.454 "data_size": 63488 00:27:40.454 }, 00:27:40.454 { 00:27:40.454 "name": null, 00:27:40.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.454 "is_configured": false, 00:27:40.454 "data_offset": 2048, 00:27:40.454 "data_size": 63488 00:27:40.454 }, 00:27:40.454 { 00:27:40.454 "name": "BaseBdev3", 00:27:40.454 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:40.454 "is_configured": true, 00:27:40.454 "data_offset": 2048, 00:27:40.454 
"data_size": 63488 00:27:40.454 }, 00:27:40.454 { 00:27:40.454 "name": "BaseBdev4", 00:27:40.454 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:40.454 "is_configured": true, 00:27:40.454 "data_offset": 2048, 00:27:40.454 "data_size": 63488 00:27:40.454 } 00:27:40.454 ] 00:27:40.454 }' 00:27:40.454 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:40.454 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:40.454 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:40.454 [2024-07-26 10:37:53.325545] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:27:40.713 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:40.713 10:37:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:41.278 [2024-07-26 10:37:54.090513] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:41.535 [2024-07-26 10:37:54.198121] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:41.536 [2024-07-26 10:37:54.199572] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.536 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.793 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.793 "name": "raid_bdev1", 00:27:41.793 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:41.793 "strip_size_kb": 0, 00:27:41.793 "state": "online", 00:27:41.793 "raid_level": "raid1", 00:27:41.793 "superblock": true, 00:27:41.793 "num_base_bdevs": 4, 00:27:41.794 "num_base_bdevs_discovered": 3, 00:27:41.794 "num_base_bdevs_operational": 3, 00:27:41.794 "base_bdevs_list": [ 00:27:41.794 { 00:27:41.794 "name": "spare", 00:27:41.794 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:41.794 "is_configured": true, 00:27:41.794 "data_offset": 2048, 00:27:41.794 "data_size": 63488 00:27:41.794 }, 00:27:41.794 { 00:27:41.794 "name": null, 00:27:41.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.794 "is_configured": false, 00:27:41.794 "data_offset": 2048, 00:27:41.794 "data_size": 63488 00:27:41.794 }, 00:27:41.794 { 00:27:41.794 "name": "BaseBdev3", 00:27:41.794 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:41.794 "is_configured": true, 00:27:41.794 "data_offset": 2048, 00:27:41.794 
"data_size": 63488 00:27:41.794 }, 00:27:41.794 { 00:27:41.794 "name": "BaseBdev4", 00:27:41.794 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:41.794 "is_configured": true, 00:27:41.794 "data_offset": 2048, 00:27:41.794 "data_size": 63488 00:27:41.794 } 00:27:41.794 ] 00:27:41.794 }' 00:27:41.794 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.794 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:41.794 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.051 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:42.051 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:27:42.051 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.051 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.052 "name": "raid_bdev1", 00:27:42.052 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:42.052 "strip_size_kb": 0, 00:27:42.052 "state": "online", 00:27:42.052 "raid_level": "raid1", 00:27:42.052 "superblock": true, 00:27:42.052 "num_base_bdevs": 4, 00:27:42.052 "num_base_bdevs_discovered": 3, 00:27:42.052 "num_base_bdevs_operational": 3, 00:27:42.052 "base_bdevs_list": [ 00:27:42.052 { 00:27:42.052 "name": "spare", 00:27:42.052 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:42.052 "is_configured": true, 00:27:42.052 "data_offset": 2048, 00:27:42.052 "data_size": 63488 00:27:42.052 }, 00:27:42.052 { 00:27:42.052 "name": null, 00:27:42.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.052 "is_configured": false, 00:27:42.052 "data_offset": 2048, 00:27:42.052 "data_size": 63488 00:27:42.052 }, 00:27:42.052 { 00:27:42.052 "name": "BaseBdev3", 00:27:42.052 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:42.052 "is_configured": true, 00:27:42.052 "data_offset": 2048, 00:27:42.052 "data_size": 63488 00:27:42.052 }, 00:27:42.052 { 00:27:42.052 "name": "BaseBdev4", 00:27:42.052 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:42.052 "is_configured": true, 00:27:42.052 "data_offset": 2048, 00:27:42.052 "data_size": 63488 00:27:42.052 } 00:27:42.052 ] 00:27:42.052 }' 00:27:42.052 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.310 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:42.310 10:37:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.310 10:37:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.310 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.568 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.568 "name": "raid_bdev1", 00:27:42.568 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:42.568 "strip_size_kb": 0, 00:27:42.568 "state": "online", 00:27:42.568 "raid_level": "raid1", 00:27:42.568 "superblock": true, 00:27:42.568 "num_base_bdevs": 4, 00:27:42.568 "num_base_bdevs_discovered": 3, 00:27:42.568 "num_base_bdevs_operational": 3, 00:27:42.568 "base_bdevs_list": [ 00:27:42.568 { 00:27:42.568 "name": "spare", 00:27:42.568 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:42.568 "is_configured": true, 00:27:42.568 "data_offset": 2048, 00:27:42.568 "data_size": 63488 00:27:42.568 }, 00:27:42.568 { 00:27:42.568 "name": null, 00:27:42.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.568 "is_configured": false, 00:27:42.568 "data_offset": 2048, 00:27:42.568 "data_size": 63488 00:27:42.568 }, 00:27:42.568 { 00:27:42.568 "name": "BaseBdev3", 00:27:42.568 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:42.568 "is_configured": true, 00:27:42.568 "data_offset": 2048, 00:27:42.568 "data_size": 63488 00:27:42.568 }, 00:27:42.568 { 00:27:42.568 "name": "BaseBdev4", 00:27:42.568 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:42.568 "is_configured": true, 00:27:42.568 "data_offset": 2048, 00:27:42.568 "data_size": 63488 00:27:42.568 } 00:27:42.568 ] 00:27:42.568 }' 00:27:42.568 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.568 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:43.133 10:37:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:43.133 [2024-07-26 10:37:56.023717] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:43.133 [2024-07-26 10:37:56.023750] 
bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:43.391 00:27:43.391 Latency(us) 00:27:43.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.391 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:43.391 raid_bdev1 : 10.83 103.16 309.49 0.00 0.00 12923.89 270.34 120795.96 00:27:43.391 =================================================================================================================== 00:27:43.391 Total : 103.16 309.49 0.00 0.00 12923.89 270.34 120795.96 00:27:43.391 [2024-07-26 10:37:56.071580] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:43.391 [2024-07-26 10:37:56.071608] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.391 [2024-07-26 10:37:56.071694] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.391 [2024-07-26 10:37:56.071705] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x273f840 name raid_bdev1, state offline 00:27:43.391 0 00:27:43.391 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.391 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:43.648 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:43.649 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:43.906 /dev/nbd0 00:27:43.906 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:43.906 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:43.907 1+0 records in 00:27:43.907 1+0 records out 00:27:43.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024858 s, 16.5 MB/s 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:43.907 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:27:44.165 /dev/nbd1 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
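The common/autotest_common.sh@868-889 trace above is the NBD readiness probe: the script waits for the freshly exported /dev/nbd0 to appear in /proc/partitions and then proves it answers I/O with a single direct-mode 4 KiB read. The following is a minimal illustrative paraphrase of that check, not the autotest_common.sh source; the retry count and block size are taken from the trace.

waitfornbd() {
    # Sketch: wait until /dev/<nbd_name> shows up in /proc/partitions, then
    # confirm a 4096-byte direct read succeeds, mirroring the probes in the trace.
    local nbd_name=$1 tmp size i
    tmp=$(mktemp)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || { rm -f "$tmp"; return 1; }
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]    # a non-empty copy means the device returned data
}

In the run above the probe succeeds on the first pass for each device (1+0 records in/out at roughly 15-16 MB/s), so only one dd per NBD device appears in the log.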
00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:44.165 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:44.166 1+0 records in 00:27:44.166 1+0 records out 00:27:44.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025216 s, 16.2 MB/s 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.166 10:37:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:44.424 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:27:44.682 /dev/nbd1 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:44.682 1+0 records in 00:27:44.682 1+0 records out 00:27:44.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271973 s, 15.1 MB/s 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.682 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:44.940 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:44.940 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:44.940 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.941 10:37:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:45.199 10:37:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:45.199 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:45.456 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:45.714 [2024-07-26 10:37:58.472049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:45.714 [2024-07-26 10:37:58.472093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:45.714 [2024-07-26 10:37:58.472111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x273f840 00:27:45.714 [2024-07-26 10:37:58.472123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:45.714 [2024-07-26 10:37:58.473621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:45.714 [2024-07-26 10:37:58.473649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:45.714 [2024-07-26 10:37:58.473720] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:45.714 [2024-07-26 10:37:58.473746] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:45.714 [2024-07-26 10:37:58.473850] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:45.714 [2024-07-26 10:37:58.473920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:45.714 spare 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.714 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.714 [2024-07-26 10:37:58.574232] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x27417a0 00:27:45.714 [2024-07-26 10:37:58.574250] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:45.714 [2024-07-26 10:37:58.574431] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x278f590 00:27:45.714 [2024-07-26 10:37:58.574570] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27417a0 00:27:45.714 [2024-07-26 10:37:58.574579] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27417a0 00:27:45.714 [2024-07-26 10:37:58.574677] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.972 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.972 "name": "raid_bdev1", 00:27:45.972 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:45.972 "strip_size_kb": 0, 00:27:45.972 "state": "online", 00:27:45.972 "raid_level": "raid1", 00:27:45.972 "superblock": true, 00:27:45.972 "num_base_bdevs": 4, 00:27:45.972 "num_base_bdevs_discovered": 3, 00:27:45.972 "num_base_bdevs_operational": 3, 00:27:45.972 "base_bdevs_list": [ 00:27:45.972 { 00:27:45.972 "name": "spare", 00:27:45.972 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:45.972 "is_configured": true, 00:27:45.973 "data_offset": 2048, 00:27:45.973 "data_size": 63488 00:27:45.973 }, 00:27:45.973 { 00:27:45.973 "name": null, 00:27:45.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.973 "is_configured": false, 00:27:45.973 "data_offset": 2048, 00:27:45.973 "data_size": 63488 00:27:45.973 }, 00:27:45.973 { 00:27:45.973 "name": "BaseBdev3", 00:27:45.973 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:45.973 "is_configured": true, 00:27:45.973 "data_offset": 2048, 00:27:45.973 "data_size": 63488 00:27:45.973 }, 00:27:45.973 { 00:27:45.973 "name": "BaseBdev4", 00:27:45.973 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:45.973 "is_configured": true, 00:27:45.973 "data_offset": 2048, 00:27:45.973 "data_size": 63488 00:27:45.973 } 00:27:45.973 ] 00:27:45.973 }' 00:27:45.973 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.973 10:37:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.539 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
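The bdev_raid.sh@760-761 steps above tear down the pass-through vbdev that fronts the delayed base device and recreate it; examine then finds the RAID superblock on "spare" and reassembles raid_bdev1 from the surviving base bdevs. A condensed sketch of that RPC sequence, using the socket path and bdev names copied from the trace (illustrative, not a canonical recipe):

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Drop the old pass-through vbdev, then recreate it on top of the delay bdev.
$RPC bdev_passthru_delete spare
$RPC bdev_passthru_create -b spare_delay -p spare
# examine_cont now reports "raid superblock found on bdev spare", claims it together
# with BaseBdev3 and BaseBdev4, and brings raid_bdev1 back online.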
00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.796 "name": "raid_bdev1", 00:27:46.796 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:46.796 "strip_size_kb": 0, 00:27:46.796 "state": "online", 00:27:46.796 "raid_level": "raid1", 00:27:46.796 "superblock": true, 00:27:46.796 "num_base_bdevs": 4, 00:27:46.796 "num_base_bdevs_discovered": 3, 00:27:46.796 "num_base_bdevs_operational": 3, 00:27:46.796 "base_bdevs_list": [ 00:27:46.796 { 00:27:46.796 "name": "spare", 00:27:46.796 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:46.796 "is_configured": true, 00:27:46.796 "data_offset": 2048, 00:27:46.796 "data_size": 63488 00:27:46.796 }, 00:27:46.796 { 00:27:46.796 "name": null, 00:27:46.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.796 "is_configured": false, 00:27:46.796 "data_offset": 2048, 00:27:46.796 "data_size": 63488 00:27:46.796 }, 00:27:46.796 { 00:27:46.796 "name": "BaseBdev3", 00:27:46.796 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:46.796 "is_configured": true, 00:27:46.796 "data_offset": 2048, 00:27:46.796 "data_size": 63488 00:27:46.796 }, 00:27:46.796 { 00:27:46.796 "name": "BaseBdev4", 00:27:46.796 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:46.796 "is_configured": true, 00:27:46.796 "data_offset": 2048, 00:27:46.796 "data_size": 63488 00:27:46.796 } 00:27:46.796 ] 00:27:46.796 }' 00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.796 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:47.054 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.054 10:37:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:47.311 [2024-07-26 10:38:00.040474] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
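The @187-@190 checks repeated throughout this run all follow the same pattern: dump the raid bdevs with bdev_raid_get_bdevs, select the entry by name with jq, and read .process.type / .process.target with a "none" fallback. A minimal standalone version of that check, assuming the same RPC socket as the trace; the helper name is illustrative:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Succeeds when raid_bdev1 reports the expected background process, e.g.
# "rebuild spare" while a rebuild is running, or "none none" once it has finished.
check_process() {
    local expected_type=$1 expected_target=$2 info
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.process.type // "none"'   <<< "$info")" = "$expected_type" ] &&
    [ "$(jq -r '.process.target // "none"' <<< "$info")" = "$expected_target" ]
}

check_process none none      # no rebuild in flight
check_process rebuild spare  # rebuild onto the re-added "spare" bdev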
00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.311 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.568 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.568 "name": "raid_bdev1", 00:27:47.568 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:47.568 "strip_size_kb": 0, 00:27:47.568 "state": "online", 00:27:47.569 "raid_level": "raid1", 00:27:47.569 "superblock": true, 00:27:47.569 "num_base_bdevs": 4, 00:27:47.569 "num_base_bdevs_discovered": 2, 00:27:47.569 "num_base_bdevs_operational": 2, 00:27:47.569 "base_bdevs_list": [ 00:27:47.569 { 00:27:47.569 "name": null, 00:27:47.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.569 "is_configured": false, 00:27:47.569 "data_offset": 2048, 00:27:47.569 "data_size": 63488 00:27:47.569 }, 00:27:47.569 { 00:27:47.569 "name": null, 00:27:47.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.569 "is_configured": false, 00:27:47.569 "data_offset": 2048, 00:27:47.569 "data_size": 63488 00:27:47.569 }, 00:27:47.569 { 00:27:47.569 "name": "BaseBdev3", 00:27:47.569 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:47.569 "is_configured": true, 00:27:47.569 "data_offset": 2048, 00:27:47.569 "data_size": 63488 00:27:47.569 }, 00:27:47.569 { 00:27:47.569 "name": "BaseBdev4", 00:27:47.569 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:47.569 "is_configured": true, 00:27:47.569 "data_offset": 2048, 00:27:47.569 "data_size": 63488 00:27:47.569 } 00:27:47.569 ] 00:27:47.569 }' 00:27:47.569 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.569 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:48.135 10:38:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:48.394 [2024-07-26 10:38:01.063296] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:48.394 [2024-07-26 10:38:01.063436] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:48.394 [2024-07-26 10:38:01.063451] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
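At bdev_raid.sh@768-@772 the test removes "spare" from the online array and then hands it back with bdev_raid_add_base_bdev; because the superblock on "spare" carries an older sequence number (5, versus 6 on raid_bdev1), the module re-adds the bdev and starts a rebuild. A compact sketch of that remove/re-add/poll cycle under the same socket and names as the trace; the polling loop is illustrative rather than the bdev_raid.sh code:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$RPC bdev_raid_remove_base_bdev spare          # array stays online with 2 of 4 base bdevs
$RPC bdev_raid_add_base_bdev raid_bdev1 spare  # stale superblock (seq 5 < 6) => re-add plus rebuild

# Poll once per second until raid_bdev1 reports the rebuild process.
until $RPC bdev_raid_get_bdevs all |
      jq -e '.[] | select(.name == "raid_bdev1") | .process.type == "rebuild"' > /dev/null; do
    sleep 1
done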
00:27:48.394 [2024-07-26 10:38:01.063478] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:48.394 [2024-07-26 10:38:01.067633] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2741670 00:27:48.394 [2024-07-26 10:38:01.069504] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:48.394 10:38:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.331 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.590 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.590 "name": "raid_bdev1", 00:27:49.590 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:49.590 "strip_size_kb": 0, 00:27:49.590 "state": "online", 00:27:49.590 "raid_level": "raid1", 00:27:49.590 "superblock": true, 00:27:49.590 "num_base_bdevs": 4, 00:27:49.590 "num_base_bdevs_discovered": 3, 00:27:49.590 "num_base_bdevs_operational": 3, 00:27:49.590 "process": { 00:27:49.590 "type": "rebuild", 00:27:49.590 "target": "spare", 00:27:49.590 "progress": { 00:27:49.590 "blocks": 24576, 00:27:49.590 "percent": 38 00:27:49.590 } 00:27:49.590 }, 00:27:49.590 "base_bdevs_list": [ 00:27:49.590 { 00:27:49.590 "name": "spare", 00:27:49.590 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:49.590 "is_configured": true, 00:27:49.590 "data_offset": 2048, 00:27:49.590 "data_size": 63488 00:27:49.590 }, 00:27:49.590 { 00:27:49.590 "name": null, 00:27:49.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.590 "is_configured": false, 00:27:49.590 "data_offset": 2048, 00:27:49.590 "data_size": 63488 00:27:49.590 }, 00:27:49.590 { 00:27:49.590 "name": "BaseBdev3", 00:27:49.590 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:49.590 "is_configured": true, 00:27:49.590 "data_offset": 2048, 00:27:49.590 "data_size": 63488 00:27:49.590 }, 00:27:49.590 { 00:27:49.590 "name": "BaseBdev4", 00:27:49.590 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:49.590 "is_configured": true, 00:27:49.590 "data_offset": 2048, 00:27:49.590 "data_size": 63488 00:27:49.590 } 00:27:49.590 ] 00:27:49.590 }' 00:27:49.590 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.590 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.590 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.590 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.590 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:49.850 [2024-07-26 10:38:02.628688] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:49.850 [2024-07-26 10:38:02.681205] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:49.850 [2024-07-26 10:38:02.681246] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.850 [2024-07-26 10:38:02.681261] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:49.850 [2024-07-26 10:38:02.681268] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.850 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.109 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.109 "name": "raid_bdev1", 00:27:50.109 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:50.109 "strip_size_kb": 0, 00:27:50.109 "state": "online", 00:27:50.109 "raid_level": "raid1", 00:27:50.109 "superblock": true, 00:27:50.109 "num_base_bdevs": 4, 00:27:50.109 "num_base_bdevs_discovered": 2, 00:27:50.109 "num_base_bdevs_operational": 2, 00:27:50.109 "base_bdevs_list": [ 00:27:50.109 { 00:27:50.109 "name": null, 00:27:50.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.109 "is_configured": false, 00:27:50.109 "data_offset": 2048, 00:27:50.109 "data_size": 63488 00:27:50.109 }, 00:27:50.109 { 00:27:50.109 "name": null, 00:27:50.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.109 "is_configured": false, 00:27:50.109 "data_offset": 2048, 00:27:50.109 "data_size": 63488 00:27:50.109 }, 00:27:50.109 { 00:27:50.109 "name": "BaseBdev3", 00:27:50.109 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:50.109 "is_configured": true, 00:27:50.109 "data_offset": 2048, 00:27:50.109 "data_size": 63488 00:27:50.109 }, 00:27:50.109 { 00:27:50.109 "name": "BaseBdev4", 00:27:50.109 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:50.109 "is_configured": true, 
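At bdev_raid.sh@775-@776 the rebuild target itself is deleted mid-rebuild (bdev_passthru_delete spare at roughly 38% progress); the process ends with the "No such device" warning and the expected "Failed to remove target bdev" error, and the array must remain online with two of its four base bdevs. A small assertion in the same jq style, using names from the trace, that captures the invariant the verify step checks:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[ "$(jq -r .state      <<< "$info")" = online ]
[ "$(jq -r .raid_level <<< "$info")" = raid1 ]
[ "$(jq -r .num_base_bdevs_discovered  <<< "$info")" -eq 2 ]
[ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 2 ]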
00:27:50.109 "data_offset": 2048, 00:27:50.109 "data_size": 63488 00:27:50.109 } 00:27:50.109 ] 00:27:50.109 }' 00:27:50.109 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.109 10:38:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:50.673 10:38:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:50.931 [2024-07-26 10:38:03.679945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:50.931 [2024-07-26 10:38:03.679993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:50.931 [2024-07-26 10:38:03.680013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2713450 00:27:50.931 [2024-07-26 10:38:03.680030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:50.931 [2024-07-26 10:38:03.680392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:50.931 [2024-07-26 10:38:03.680409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:50.931 [2024-07-26 10:38:03.680487] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:50.931 [2024-07-26 10:38:03.680499] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:27:50.931 [2024-07-26 10:38:03.680509] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:50.931 [2024-07-26 10:38:03.680527] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:50.931 [2024-07-26 10:38:03.684675] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x278f4e0 00:27:50.931 spare 00:27:50.931 [2024-07-26 10:38:03.686032] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:50.931 10:38:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.869 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.129 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.129 "name": "raid_bdev1", 00:27:52.129 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:52.129 "strip_size_kb": 0, 00:27:52.129 "state": "online", 00:27:52.129 "raid_level": "raid1", 00:27:52.129 "superblock": true, 00:27:52.129 "num_base_bdevs": 4, 00:27:52.129 "num_base_bdevs_discovered": 3, 00:27:52.129 
"num_base_bdevs_operational": 3, 00:27:52.129 "process": { 00:27:52.129 "type": "rebuild", 00:27:52.129 "target": "spare", 00:27:52.129 "progress": { 00:27:52.129 "blocks": 24576, 00:27:52.129 "percent": 38 00:27:52.129 } 00:27:52.129 }, 00:27:52.129 "base_bdevs_list": [ 00:27:52.129 { 00:27:52.129 "name": "spare", 00:27:52.129 "uuid": "509c9c0e-d4c5-5492-b6f4-5284951fefb6", 00:27:52.129 "is_configured": true, 00:27:52.129 "data_offset": 2048, 00:27:52.129 "data_size": 63488 00:27:52.129 }, 00:27:52.129 { 00:27:52.129 "name": null, 00:27:52.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.129 "is_configured": false, 00:27:52.129 "data_offset": 2048, 00:27:52.129 "data_size": 63488 00:27:52.129 }, 00:27:52.129 { 00:27:52.129 "name": "BaseBdev3", 00:27:52.129 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:52.129 "is_configured": true, 00:27:52.129 "data_offset": 2048, 00:27:52.129 "data_size": 63488 00:27:52.129 }, 00:27:52.129 { 00:27:52.129 "name": "BaseBdev4", 00:27:52.129 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:52.129 "is_configured": true, 00:27:52.129 "data_offset": 2048, 00:27:52.129 "data_size": 63488 00:27:52.129 } 00:27:52.129 ] 00:27:52.129 }' 00:27:52.129 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.129 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.129 10:38:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.419 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.419 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:52.419 [2024-07-26 10:38:05.241695] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:52.419 [2024-07-26 10:38:05.297800] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:52.419 [2024-07-26 10:38:05.297840] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.419 [2024-07-26 10:38:05.297861] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:52.419 [2024-07-26 10:38:05.297869] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.677 
10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.677 "name": "raid_bdev1", 00:27:52.677 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:52.677 "strip_size_kb": 0, 00:27:52.677 "state": "online", 00:27:52.677 "raid_level": "raid1", 00:27:52.677 "superblock": true, 00:27:52.677 "num_base_bdevs": 4, 00:27:52.677 "num_base_bdevs_discovered": 2, 00:27:52.677 "num_base_bdevs_operational": 2, 00:27:52.677 "base_bdevs_list": [ 00:27:52.677 { 00:27:52.677 "name": null, 00:27:52.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.677 "is_configured": false, 00:27:52.677 "data_offset": 2048, 00:27:52.677 "data_size": 63488 00:27:52.677 }, 00:27:52.677 { 00:27:52.677 "name": null, 00:27:52.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.677 "is_configured": false, 00:27:52.677 "data_offset": 2048, 00:27:52.677 "data_size": 63488 00:27:52.677 }, 00:27:52.677 { 00:27:52.677 "name": "BaseBdev3", 00:27:52.677 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:52.677 "is_configured": true, 00:27:52.677 "data_offset": 2048, 00:27:52.677 "data_size": 63488 00:27:52.677 }, 00:27:52.677 { 00:27:52.677 "name": "BaseBdev4", 00:27:52.677 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:52.677 "is_configured": true, 00:27:52.677 "data_offset": 2048, 00:27:52.677 "data_size": 63488 00:27:52.677 } 00:27:52.677 ] 00:27:52.677 }' 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.677 10:38:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.245 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.504 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:53.504 "name": "raid_bdev1", 00:27:53.504 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:53.504 "strip_size_kb": 0, 00:27:53.504 "state": "online", 00:27:53.504 "raid_level": "raid1", 00:27:53.504 "superblock": true, 00:27:53.504 "num_base_bdevs": 4, 00:27:53.504 "num_base_bdevs_discovered": 2, 00:27:53.504 "num_base_bdevs_operational": 2, 00:27:53.504 "base_bdevs_list": [ 00:27:53.504 { 00:27:53.504 "name": null, 00:27:53.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.504 
"is_configured": false, 00:27:53.504 "data_offset": 2048, 00:27:53.504 "data_size": 63488 00:27:53.504 }, 00:27:53.504 { 00:27:53.504 "name": null, 00:27:53.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.504 "is_configured": false, 00:27:53.504 "data_offset": 2048, 00:27:53.504 "data_size": 63488 00:27:53.504 }, 00:27:53.504 { 00:27:53.504 "name": "BaseBdev3", 00:27:53.504 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:53.504 "is_configured": true, 00:27:53.504 "data_offset": 2048, 00:27:53.504 "data_size": 63488 00:27:53.504 }, 00:27:53.504 { 00:27:53.504 "name": "BaseBdev4", 00:27:53.504 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:53.504 "is_configured": true, 00:27:53.504 "data_offset": 2048, 00:27:53.504 "data_size": 63488 00:27:53.504 } 00:27:53.504 ] 00:27:53.504 }' 00:27:53.504 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:53.763 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:53.763 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:53.763 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:53.763 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:54.022 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:54.022 [2024-07-26 10:38:06.902283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:54.022 [2024-07-26 10:38:06.902330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.022 [2024-07-26 10:38:06.902350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x278f410 00:27:54.022 [2024-07-26 10:38:06.902362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.022 [2024-07-26 10:38:06.902689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.022 [2024-07-26 10:38:06.902704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:54.022 [2024-07-26 10:38:06.902762] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:54.022 [2024-07-26 10:38:06.902774] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:54.022 [2024-07-26 10:38:06.902783] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:54.022 BaseBdev1 00:27:54.022 10:38:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:55.396 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.397 10:38:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.397 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.397 "name": "raid_bdev1", 00:27:55.397 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:55.397 "strip_size_kb": 0, 00:27:55.397 "state": "online", 00:27:55.397 "raid_level": "raid1", 00:27:55.397 "superblock": true, 00:27:55.397 "num_base_bdevs": 4, 00:27:55.397 "num_base_bdevs_discovered": 2, 00:27:55.397 "num_base_bdevs_operational": 2, 00:27:55.397 "base_bdevs_list": [ 00:27:55.397 { 00:27:55.397 "name": null, 00:27:55.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.397 "is_configured": false, 00:27:55.397 "data_offset": 2048, 00:27:55.397 "data_size": 63488 00:27:55.397 }, 00:27:55.397 { 00:27:55.397 "name": null, 00:27:55.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.397 "is_configured": false, 00:27:55.397 "data_offset": 2048, 00:27:55.397 "data_size": 63488 00:27:55.397 }, 00:27:55.397 { 00:27:55.397 "name": "BaseBdev3", 00:27:55.397 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:55.397 "is_configured": true, 00:27:55.397 "data_offset": 2048, 00:27:55.397 "data_size": 63488 00:27:55.397 }, 00:27:55.397 { 00:27:55.397 "name": "BaseBdev4", 00:27:55.397 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:55.397 "is_configured": true, 00:27:55.397 "data_offset": 2048, 00:27:55.397 "data_size": 63488 00:27:55.397 } 00:27:55.397 ] 00:27:55.397 }' 00:27:55.397 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.397 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.963 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.222 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:27:56.222 "name": "raid_bdev1", 00:27:56.222 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:56.222 "strip_size_kb": 0, 00:27:56.222 "state": "online", 00:27:56.222 "raid_level": "raid1", 00:27:56.222 "superblock": true, 00:27:56.222 "num_base_bdevs": 4, 00:27:56.222 "num_base_bdevs_discovered": 2, 00:27:56.222 "num_base_bdevs_operational": 2, 00:27:56.222 "base_bdevs_list": [ 00:27:56.222 { 00:27:56.222 "name": null, 00:27:56.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.222 "is_configured": false, 00:27:56.222 "data_offset": 2048, 00:27:56.222 "data_size": 63488 00:27:56.222 }, 00:27:56.222 { 00:27:56.222 "name": null, 00:27:56.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.222 "is_configured": false, 00:27:56.222 "data_offset": 2048, 00:27:56.222 "data_size": 63488 00:27:56.222 }, 00:27:56.222 { 00:27:56.222 "name": "BaseBdev3", 00:27:56.222 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:56.222 "is_configured": true, 00:27:56.222 "data_offset": 2048, 00:27:56.222 "data_size": 63488 00:27:56.222 }, 00:27:56.222 { 00:27:56.222 "name": "BaseBdev4", 00:27:56.222 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:56.222 "is_configured": true, 00:27:56.222 "data_offset": 2048, 00:27:56.222 "data_size": 63488 00:27:56.222 } 00:27:56.222 ] 00:27:56.222 }' 00:27:56.222 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.222 10:38:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.222 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:56.222 
10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:56.480 [2024-07-26 10:38:09.260908] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:56.480 [2024-07-26 10:38:09.261028] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:27:56.480 [2024-07-26 10:38:09.261042] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:56.480 request: 00:27:56.480 { 00:27:56.480 "base_bdev": "BaseBdev1", 00:27:56.480 "raid_bdev": "raid_bdev1", 00:27:56.480 "method": "bdev_raid_add_base_bdev", 00:27:56.480 "req_id": 1 00:27:56.480 } 00:27:56.480 Got JSON-RPC error response 00:27:56.480 response: 00:27:56.480 { 00:27:56.480 "code": -22, 00:27:56.480 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:56.480 } 00:27:56.480 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:27:56.480 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.480 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.480 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.480 10:38:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.415 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.673 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.673 "name": "raid_bdev1", 00:27:57.673 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:57.673 "strip_size_kb": 0, 00:27:57.673 "state": "online", 00:27:57.673 "raid_level": "raid1", 00:27:57.673 "superblock": true, 00:27:57.673 "num_base_bdevs": 4, 00:27:57.673 "num_base_bdevs_discovered": 2, 00:27:57.673 "num_base_bdevs_operational": 2, 00:27:57.673 "base_bdevs_list": [ 
00:27:57.673 { 00:27:57.673 "name": null, 00:27:57.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.673 "is_configured": false, 00:27:57.673 "data_offset": 2048, 00:27:57.673 "data_size": 63488 00:27:57.673 }, 00:27:57.673 { 00:27:57.673 "name": null, 00:27:57.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.673 "is_configured": false, 00:27:57.673 "data_offset": 2048, 00:27:57.673 "data_size": 63488 00:27:57.673 }, 00:27:57.673 { 00:27:57.673 "name": "BaseBdev3", 00:27:57.673 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:57.673 "is_configured": true, 00:27:57.673 "data_offset": 2048, 00:27:57.673 "data_size": 63488 00:27:57.673 }, 00:27:57.673 { 00:27:57.673 "name": "BaseBdev4", 00:27:57.673 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:57.673 "is_configured": true, 00:27:57.673 "data_offset": 2048, 00:27:57.673 "data_size": 63488 00:27:57.673 } 00:27:57.673 ] 00:27:57.673 }' 00:27:57.673 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.673 10:38:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.240 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.499 "name": "raid_bdev1", 00:27:58.499 "uuid": "779ff554-a8b0-4fa1-af16-20b3eb481f62", 00:27:58.499 "strip_size_kb": 0, 00:27:58.499 "state": "online", 00:27:58.499 "raid_level": "raid1", 00:27:58.499 "superblock": true, 00:27:58.499 "num_base_bdevs": 4, 00:27:58.499 "num_base_bdevs_discovered": 2, 00:27:58.499 "num_base_bdevs_operational": 2, 00:27:58.499 "base_bdevs_list": [ 00:27:58.499 { 00:27:58.499 "name": null, 00:27:58.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.499 "is_configured": false, 00:27:58.499 "data_offset": 2048, 00:27:58.499 "data_size": 63488 00:27:58.499 }, 00:27:58.499 { 00:27:58.499 "name": null, 00:27:58.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.499 "is_configured": false, 00:27:58.499 "data_offset": 2048, 00:27:58.499 "data_size": 63488 00:27:58.499 }, 00:27:58.499 { 00:27:58.499 "name": "BaseBdev3", 00:27:58.499 "uuid": "9be45fab-08e1-5cfb-9693-c2b37932c127", 00:27:58.499 "is_configured": true, 00:27:58.499 "data_offset": 2048, 00:27:58.499 "data_size": 63488 00:27:58.499 }, 00:27:58.499 { 00:27:58.499 "name": "BaseBdev4", 00:27:58.499 "uuid": "57a5afc8-7cad-5c25-933a-0af36aec9f14", 00:27:58.499 "is_configured": true, 00:27:58.499 "data_offset": 2048, 00:27:58.499 "data_size": 63488 00:27:58.499 } 00:27:58.499 ] 00:27:58.499 }' 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 3505423 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 3505423 ']' 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 3505423 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:27:58.499 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.758 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3505423 00:27:58.758 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:58.758 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:58.758 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3505423' 00:27:58.758 killing process with pid 3505423 00:27:58.758 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 3505423 00:27:58.758 Received shutdown signal, test time was about 26.169857 seconds 00:27:58.758 00:27:58.758 Latency(us) 00:27:58.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.758 =================================================================================================================== 00:27:58.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.758 [2024-07-26 10:38:11.447611] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:58.758 [2024-07-26 10:38:11.447708] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:58.758 [2024-07-26 10:38:11.447763] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:58.758 [2024-07-26 10:38:11.447774] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27417a0 name raid_bdev1, state offline 00:27:58.758 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 3505423 00:27:58.758 [2024-07-26 10:38:11.483193] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:59.017 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:27:59.017 00:27:59.017 real 0m31.494s 00:27:59.017 user 0m49.357s 00:27:59.017 sys 0m4.984s 00:27:59.017 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:59.017 10:38:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:59.017 ************************************ 00:27:59.017 END TEST raid_rebuild_test_sb_io 00:27:59.017 ************************************ 00:27:59.017 10:38:11 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' n == y ']' 00:27:59.017 10:38:11 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:27:59.017 10:38:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:27:59.017 10:38:11 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:59.017 10:38:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.017 10:38:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:59.017 ************************************ 00:27:59.017 START TEST raid_state_function_test_sb_4k 00:27:59.017 ************************************ 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=3511121 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3511121' 00:27:59.017 Process raid pid: 3511121 00:27:59.017 10:38:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 3511121 /var/tmp/spdk-raid.sock 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 3511121 ']' 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:59.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.017 10:38:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:59.017 [2024-07-26 10:38:11.807361] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:27:59.017 [2024-07-26 10:38:11.807417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:59.017 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:59.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.017 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:59.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:59.018 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:59.276 [2024-07-26 10:38:11.942282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.276 [2024-07-26 10:38:11.986844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.276 [2024-07-26 10:38:12.046596] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:59.276 [2024-07-26 10:38:12.046631] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:59.843 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.843 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:27:59.843 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:00.102 [2024-07-26 10:38:12.907040] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:00.102 [2024-07-26 10:38:12.907080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't 
exist now 00:28:00.102 [2024-07-26 10:38:12.907090] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:00.102 [2024-07-26 10:38:12.907101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.102 10:38:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:00.361 10:38:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.361 "name": "Existed_Raid", 00:28:00.361 "uuid": "533e09ae-ba90-410d-bc39-f6d7cd4b07b3", 00:28:00.361 "strip_size_kb": 0, 00:28:00.361 "state": "configuring", 00:28:00.361 "raid_level": "raid1", 00:28:00.361 "superblock": true, 00:28:00.361 "num_base_bdevs": 2, 00:28:00.361 "num_base_bdevs_discovered": 0, 00:28:00.361 "num_base_bdevs_operational": 2, 00:28:00.361 "base_bdevs_list": [ 00:28:00.361 { 00:28:00.361 "name": "BaseBdev1", 00:28:00.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.361 "is_configured": false, 00:28:00.361 "data_offset": 0, 00:28:00.361 "data_size": 0 00:28:00.361 }, 00:28:00.361 { 00:28:00.361 "name": "BaseBdev2", 00:28:00.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.361 "is_configured": false, 00:28:00.361 "data_offset": 0, 00:28:00.361 "data_size": 0 00:28:00.361 } 00:28:00.361 ] 00:28:00.361 }' 00:28:00.361 10:38:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.361 10:38:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:00.925 10:38:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:01.184 [2024-07-26 10:38:13.941633] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:01.184 [2024-07-26 10:38:13.941663] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26a4ce0 name Existed_Raid, state configuring 00:28:01.184 10:38:13 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:01.443 [2024-07-26 10:38:14.154213] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:01.443 [2024-07-26 10:38:14.154238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:01.443 [2024-07-26 10:38:14.154247] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.443 [2024-07-26 10:38:14.154257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.443 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:28:01.701 [2024-07-26 10:38:14.388124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.701 BaseBdev1 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:01.701 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:01.960 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:01.960 [ 00:28:01.960 { 00:28:01.960 "name": "BaseBdev1", 00:28:01.960 "aliases": [ 00:28:01.960 "b22889d3-3eb5-4156-b7cf-e03c760db512" 00:28:01.960 ], 00:28:01.960 "product_name": "Malloc disk", 00:28:01.960 "block_size": 4096, 00:28:01.960 "num_blocks": 8192, 00:28:01.960 "uuid": "b22889d3-3eb5-4156-b7cf-e03c760db512", 00:28:01.960 "assigned_rate_limits": { 00:28:01.960 "rw_ios_per_sec": 0, 00:28:01.960 "rw_mbytes_per_sec": 0, 00:28:01.960 "r_mbytes_per_sec": 0, 00:28:01.960 "w_mbytes_per_sec": 0 00:28:01.960 }, 00:28:01.960 "claimed": true, 00:28:01.960 "claim_type": "exclusive_write", 00:28:01.960 "zoned": false, 00:28:01.960 "supported_io_types": { 00:28:01.960 "read": true, 00:28:01.960 "write": true, 00:28:01.960 "unmap": true, 00:28:01.960 "flush": true, 00:28:01.960 "reset": true, 00:28:01.960 "nvme_admin": false, 00:28:01.960 "nvme_io": false, 00:28:01.960 "nvme_io_md": false, 00:28:01.960 "write_zeroes": true, 00:28:01.960 "zcopy": true, 00:28:01.960 "get_zone_info": false, 00:28:01.960 "zone_management": false, 00:28:01.960 "zone_append": false, 00:28:01.960 "compare": false, 00:28:01.960 "compare_and_write": false, 00:28:01.960 "abort": true, 00:28:01.960 "seek_hole": false, 00:28:01.960 "seek_data": false, 00:28:01.960 "copy": true, 
00:28:01.960 "nvme_iov_md": false 00:28:01.960 }, 00:28:01.960 "memory_domains": [ 00:28:01.960 { 00:28:01.960 "dma_device_id": "system", 00:28:01.960 "dma_device_type": 1 00:28:01.960 }, 00:28:01.960 { 00:28:01.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.960 "dma_device_type": 2 00:28:01.960 } 00:28:01.960 ], 00:28:01.960 "driver_specific": {} 00:28:01.960 } 00:28:01.960 ] 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.219 10:38:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.219 10:38:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.219 "name": "Existed_Raid", 00:28:02.219 "uuid": "7c3249d5-be84-4e45-8a92-c5bb1890c965", 00:28:02.219 "strip_size_kb": 0, 00:28:02.219 "state": "configuring", 00:28:02.219 "raid_level": "raid1", 00:28:02.219 "superblock": true, 00:28:02.219 "num_base_bdevs": 2, 00:28:02.219 "num_base_bdevs_discovered": 1, 00:28:02.219 "num_base_bdevs_operational": 2, 00:28:02.219 "base_bdevs_list": [ 00:28:02.219 { 00:28:02.219 "name": "BaseBdev1", 00:28:02.219 "uuid": "b22889d3-3eb5-4156-b7cf-e03c760db512", 00:28:02.219 "is_configured": true, 00:28:02.219 "data_offset": 256, 00:28:02.219 "data_size": 7936 00:28:02.219 }, 00:28:02.219 { 00:28:02.219 "name": "BaseBdev2", 00:28:02.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.219 "is_configured": false, 00:28:02.219 "data_offset": 0, 00:28:02.219 "data_size": 0 00:28:02.219 } 00:28:02.219 ] 00:28:02.219 }' 00:28:02.219 10:38:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.219 10:38:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:03.154 10:38:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:03.154 [2024-07-26 10:38:15.896083] bdev_raid.c:2398:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:28:03.154 [2024-07-26 10:38:15.896119] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26a4610 name Existed_Raid, state configuring 00:28:03.154 10:38:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:03.412 [2024-07-26 10:38:16.120712] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:03.412 [2024-07-26 10:38:16.122051] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:03.412 [2024-07-26 10:38:16.122082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.412 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.670 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.670 "name": "Existed_Raid", 00:28:03.670 "uuid": "e8a8b905-4795-42f3-aea0-3ecdeade3547", 00:28:03.670 "strip_size_kb": 0, 00:28:03.670 "state": "configuring", 00:28:03.670 "raid_level": "raid1", 00:28:03.670 "superblock": true, 00:28:03.670 "num_base_bdevs": 2, 00:28:03.670 "num_base_bdevs_discovered": 1, 00:28:03.670 "num_base_bdevs_operational": 2, 00:28:03.670 "base_bdevs_list": [ 00:28:03.670 { 00:28:03.670 "name": "BaseBdev1", 00:28:03.670 "uuid": "b22889d3-3eb5-4156-b7cf-e03c760db512", 00:28:03.670 "is_configured": true, 00:28:03.670 "data_offset": 256, 00:28:03.670 "data_size": 7936 00:28:03.670 }, 00:28:03.670 { 00:28:03.670 "name": "BaseBdev2", 00:28:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.670 "is_configured": false, 00:28:03.670 "data_offset": 0, 00:28:03.670 "data_size": 
0 00:28:03.670 } 00:28:03.670 ] 00:28:03.670 }' 00:28:03.670 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.670 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:04.236 10:38:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:28:04.494 [2024-07-26 10:38:17.154659] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:04.494 [2024-07-26 10:38:17.154792] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x28570f0 00:28:04.494 [2024-07-26 10:38:17.154804] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:04.494 [2024-07-26 10:38:17.154964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a39a0 00:28:04.494 [2024-07-26 10:38:17.155075] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28570f0 00:28:04.494 [2024-07-26 10:38:17.155084] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x28570f0 00:28:04.494 [2024-07-26 10:38:17.155177] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.494 BaseBdev2 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:04.494 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:04.752 [ 00:28:04.752 { 00:28:04.752 "name": "BaseBdev2", 00:28:04.752 "aliases": [ 00:28:04.752 "15623d84-f2d9-437c-9859-cec52689f5b3" 00:28:04.752 ], 00:28:04.752 "product_name": "Malloc disk", 00:28:04.752 "block_size": 4096, 00:28:04.752 "num_blocks": 8192, 00:28:04.752 "uuid": "15623d84-f2d9-437c-9859-cec52689f5b3", 00:28:04.752 "assigned_rate_limits": { 00:28:04.752 "rw_ios_per_sec": 0, 00:28:04.752 "rw_mbytes_per_sec": 0, 00:28:04.752 "r_mbytes_per_sec": 0, 00:28:04.752 "w_mbytes_per_sec": 0 00:28:04.752 }, 00:28:04.752 "claimed": true, 00:28:04.752 "claim_type": "exclusive_write", 00:28:04.752 "zoned": false, 00:28:04.752 "supported_io_types": { 00:28:04.752 "read": true, 00:28:04.752 "write": true, 00:28:04.752 "unmap": true, 00:28:04.752 "flush": true, 00:28:04.752 "reset": true, 00:28:04.752 "nvme_admin": false, 00:28:04.752 "nvme_io": false, 00:28:04.752 "nvme_io_md": false, 00:28:04.752 "write_zeroes": true, 00:28:04.752 "zcopy": true, 00:28:04.752 "get_zone_info": false, 00:28:04.752 
"zone_management": false, 00:28:04.752 "zone_append": false, 00:28:04.752 "compare": false, 00:28:04.752 "compare_and_write": false, 00:28:04.752 "abort": true, 00:28:04.752 "seek_hole": false, 00:28:04.752 "seek_data": false, 00:28:04.752 "copy": true, 00:28:04.752 "nvme_iov_md": false 00:28:04.752 }, 00:28:04.752 "memory_domains": [ 00:28:04.752 { 00:28:04.752 "dma_device_id": "system", 00:28:04.752 "dma_device_type": 1 00:28:04.752 }, 00:28:04.752 { 00:28:04.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.752 "dma_device_type": 2 00:28:04.752 } 00:28:04.752 ], 00:28:04.752 "driver_specific": {} 00:28:04.752 } 00:28:04.752 ] 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.752 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.011 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.011 "name": "Existed_Raid", 00:28:05.011 "uuid": "e8a8b905-4795-42f3-aea0-3ecdeade3547", 00:28:05.011 "strip_size_kb": 0, 00:28:05.011 "state": "online", 00:28:05.011 "raid_level": "raid1", 00:28:05.011 "superblock": true, 00:28:05.011 "num_base_bdevs": 2, 00:28:05.011 "num_base_bdevs_discovered": 2, 00:28:05.011 "num_base_bdevs_operational": 2, 00:28:05.011 "base_bdevs_list": [ 00:28:05.011 { 00:28:05.011 "name": "BaseBdev1", 00:28:05.011 "uuid": "b22889d3-3eb5-4156-b7cf-e03c760db512", 00:28:05.011 "is_configured": true, 00:28:05.011 "data_offset": 256, 00:28:05.011 "data_size": 7936 00:28:05.011 }, 00:28:05.011 { 00:28:05.011 "name": "BaseBdev2", 00:28:05.011 "uuid": "15623d84-f2d9-437c-9859-cec52689f5b3", 00:28:05.011 "is_configured": true, 00:28:05.011 "data_offset": 256, 00:28:05.011 "data_size": 7936 00:28:05.011 } 00:28:05.011 ] 00:28:05.011 }' 00:28:05.011 10:38:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.011 10:38:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:05.579 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:05.838 [2024-07-26 10:38:18.626803] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:05.838 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:05.838 "name": "Existed_Raid", 00:28:05.838 "aliases": [ 00:28:05.838 "e8a8b905-4795-42f3-aea0-3ecdeade3547" 00:28:05.838 ], 00:28:05.838 "product_name": "Raid Volume", 00:28:05.838 "block_size": 4096, 00:28:05.838 "num_blocks": 7936, 00:28:05.838 "uuid": "e8a8b905-4795-42f3-aea0-3ecdeade3547", 00:28:05.838 "assigned_rate_limits": { 00:28:05.838 "rw_ios_per_sec": 0, 00:28:05.838 "rw_mbytes_per_sec": 0, 00:28:05.838 "r_mbytes_per_sec": 0, 00:28:05.838 "w_mbytes_per_sec": 0 00:28:05.838 }, 00:28:05.838 "claimed": false, 00:28:05.838 "zoned": false, 00:28:05.838 "supported_io_types": { 00:28:05.838 "read": true, 00:28:05.838 "write": true, 00:28:05.838 "unmap": false, 00:28:05.838 "flush": false, 00:28:05.838 "reset": true, 00:28:05.838 "nvme_admin": false, 00:28:05.838 "nvme_io": false, 00:28:05.838 "nvme_io_md": false, 00:28:05.838 "write_zeroes": true, 00:28:05.838 "zcopy": false, 00:28:05.838 "get_zone_info": false, 00:28:05.838 "zone_management": false, 00:28:05.838 "zone_append": false, 00:28:05.838 "compare": false, 00:28:05.838 "compare_and_write": false, 00:28:05.838 "abort": false, 00:28:05.838 "seek_hole": false, 00:28:05.838 "seek_data": false, 00:28:05.838 "copy": false, 00:28:05.838 "nvme_iov_md": false 00:28:05.838 }, 00:28:05.838 "memory_domains": [ 00:28:05.838 { 00:28:05.838 "dma_device_id": "system", 00:28:05.838 "dma_device_type": 1 00:28:05.838 }, 00:28:05.838 { 00:28:05.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.838 "dma_device_type": 2 00:28:05.838 }, 00:28:05.838 { 00:28:05.838 "dma_device_id": "system", 00:28:05.838 "dma_device_type": 1 00:28:05.838 }, 00:28:05.838 { 00:28:05.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.838 "dma_device_type": 2 00:28:05.838 } 00:28:05.838 ], 00:28:05.838 "driver_specific": { 00:28:05.838 "raid": { 00:28:05.838 "uuid": "e8a8b905-4795-42f3-aea0-3ecdeade3547", 00:28:05.838 "strip_size_kb": 0, 00:28:05.838 "state": "online", 00:28:05.838 "raid_level": "raid1", 00:28:05.838 "superblock": true, 00:28:05.838 "num_base_bdevs": 2, 00:28:05.838 "num_base_bdevs_discovered": 2, 00:28:05.838 
"num_base_bdevs_operational": 2, 00:28:05.838 "base_bdevs_list": [ 00:28:05.838 { 00:28:05.838 "name": "BaseBdev1", 00:28:05.838 "uuid": "b22889d3-3eb5-4156-b7cf-e03c760db512", 00:28:05.838 "is_configured": true, 00:28:05.838 "data_offset": 256, 00:28:05.838 "data_size": 7936 00:28:05.838 }, 00:28:05.838 { 00:28:05.838 "name": "BaseBdev2", 00:28:05.838 "uuid": "15623d84-f2d9-437c-9859-cec52689f5b3", 00:28:05.838 "is_configured": true, 00:28:05.838 "data_offset": 256, 00:28:05.838 "data_size": 7936 00:28:05.838 } 00:28:05.838 ] 00:28:05.838 } 00:28:05.838 } 00:28:05.838 }' 00:28:05.838 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:05.838 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:05.838 BaseBdev2' 00:28:05.838 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:05.838 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:05.838 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:06.097 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:06.097 "name": "BaseBdev1", 00:28:06.097 "aliases": [ 00:28:06.097 "b22889d3-3eb5-4156-b7cf-e03c760db512" 00:28:06.097 ], 00:28:06.097 "product_name": "Malloc disk", 00:28:06.097 "block_size": 4096, 00:28:06.097 "num_blocks": 8192, 00:28:06.097 "uuid": "b22889d3-3eb5-4156-b7cf-e03c760db512", 00:28:06.097 "assigned_rate_limits": { 00:28:06.097 "rw_ios_per_sec": 0, 00:28:06.097 "rw_mbytes_per_sec": 0, 00:28:06.097 "r_mbytes_per_sec": 0, 00:28:06.097 "w_mbytes_per_sec": 0 00:28:06.097 }, 00:28:06.097 "claimed": true, 00:28:06.097 "claim_type": "exclusive_write", 00:28:06.097 "zoned": false, 00:28:06.097 "supported_io_types": { 00:28:06.097 "read": true, 00:28:06.097 "write": true, 00:28:06.097 "unmap": true, 00:28:06.097 "flush": true, 00:28:06.097 "reset": true, 00:28:06.097 "nvme_admin": false, 00:28:06.097 "nvme_io": false, 00:28:06.097 "nvme_io_md": false, 00:28:06.097 "write_zeroes": true, 00:28:06.097 "zcopy": true, 00:28:06.097 "get_zone_info": false, 00:28:06.097 "zone_management": false, 00:28:06.097 "zone_append": false, 00:28:06.097 "compare": false, 00:28:06.097 "compare_and_write": false, 00:28:06.097 "abort": true, 00:28:06.097 "seek_hole": false, 00:28:06.097 "seek_data": false, 00:28:06.097 "copy": true, 00:28:06.097 "nvme_iov_md": false 00:28:06.097 }, 00:28:06.097 "memory_domains": [ 00:28:06.097 { 00:28:06.097 "dma_device_id": "system", 00:28:06.097 "dma_device_type": 1 00:28:06.097 }, 00:28:06.097 { 00:28:06.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.097 "dma_device_type": 2 00:28:06.097 } 00:28:06.097 ], 00:28:06.097 "driver_specific": {} 00:28:06.097 }' 00:28:06.097 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:06.097 10:38:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:06.356 10:38:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:06.356 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:06.615 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:06.615 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:06.615 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:06.615 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:06.615 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:06.615 "name": "BaseBdev2", 00:28:06.615 "aliases": [ 00:28:06.615 "15623d84-f2d9-437c-9859-cec52689f5b3" 00:28:06.615 ], 00:28:06.615 "product_name": "Malloc disk", 00:28:06.615 "block_size": 4096, 00:28:06.615 "num_blocks": 8192, 00:28:06.615 "uuid": "15623d84-f2d9-437c-9859-cec52689f5b3", 00:28:06.615 "assigned_rate_limits": { 00:28:06.615 "rw_ios_per_sec": 0, 00:28:06.615 "rw_mbytes_per_sec": 0, 00:28:06.615 "r_mbytes_per_sec": 0, 00:28:06.615 "w_mbytes_per_sec": 0 00:28:06.615 }, 00:28:06.615 "claimed": true, 00:28:06.615 "claim_type": "exclusive_write", 00:28:06.615 "zoned": false, 00:28:06.615 "supported_io_types": { 00:28:06.615 "read": true, 00:28:06.615 "write": true, 00:28:06.615 "unmap": true, 00:28:06.615 "flush": true, 00:28:06.615 "reset": true, 00:28:06.615 "nvme_admin": false, 00:28:06.615 "nvme_io": false, 00:28:06.615 "nvme_io_md": false, 00:28:06.615 "write_zeroes": true, 00:28:06.615 "zcopy": true, 00:28:06.615 "get_zone_info": false, 00:28:06.615 "zone_management": false, 00:28:06.615 "zone_append": false, 00:28:06.615 "compare": false, 00:28:06.615 "compare_and_write": false, 00:28:06.615 "abort": true, 00:28:06.615 "seek_hole": false, 00:28:06.615 "seek_data": false, 00:28:06.615 "copy": true, 00:28:06.615 "nvme_iov_md": false 00:28:06.615 }, 00:28:06.615 "memory_domains": [ 00:28:06.615 { 00:28:06.615 "dma_device_id": "system", 00:28:06.615 "dma_device_type": 1 00:28:06.615 }, 00:28:06.615 { 00:28:06.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.615 "dma_device_type": 2 00:28:06.615 } 00:28:06.615 ], 00:28:06.615 "driver_specific": {} 00:28:06.615 }' 00:28:06.615 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:06.874 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:07.181 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:07.181 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:07.181 10:38:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:07.181 [2024-07-26 10:38:20.046348] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.181 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.454 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.454 "name": "Existed_Raid", 00:28:07.454 "uuid": "e8a8b905-4795-42f3-aea0-3ecdeade3547", 00:28:07.454 "strip_size_kb": 0, 00:28:07.454 "state": "online", 00:28:07.454 "raid_level": "raid1", 00:28:07.454 "superblock": true, 00:28:07.454 
"num_base_bdevs": 2, 00:28:07.454 "num_base_bdevs_discovered": 1, 00:28:07.454 "num_base_bdevs_operational": 1, 00:28:07.454 "base_bdevs_list": [ 00:28:07.454 { 00:28:07.454 "name": null, 00:28:07.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.454 "is_configured": false, 00:28:07.454 "data_offset": 256, 00:28:07.454 "data_size": 7936 00:28:07.454 }, 00:28:07.454 { 00:28:07.454 "name": "BaseBdev2", 00:28:07.454 "uuid": "15623d84-f2d9-437c-9859-cec52689f5b3", 00:28:07.454 "is_configured": true, 00:28:07.454 "data_offset": 256, 00:28:07.454 "data_size": 7936 00:28:07.454 } 00:28:07.454 ] 00:28:07.454 }' 00:28:07.454 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.454 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:08.023 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:08.023 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:08.023 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.023 10:38:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:08.282 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:08.282 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:08.282 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:08.541 [2024-07-26 10:38:21.330776] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:08.541 [2024-07-26 10:38:21.330849] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:08.541 [2024-07-26 10:38:21.340757] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:08.541 [2024-07-26 10:38:21.340784] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:08.541 [2024-07-26 10:38:21.340794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28570f0 name Existed_Raid, state offline 00:28:08.541 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:08.541 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:08.541 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.541 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 3511121 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@950 -- # '[' -z 3511121 ']' 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 3511121 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3511121 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3511121' 00:28:08.866 killing process with pid 3511121 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 3511121 00:28:08.866 [2024-07-26 10:38:21.654681] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:08.866 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 3511121 00:28:08.866 [2024-07-26 10:38:21.655522] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:09.125 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:28:09.125 00:28:09.125 real 0m10.083s 00:28:09.125 user 0m17.924s 00:28:09.125 sys 0m1.915s 00:28:09.125 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.125 10:38:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:09.125 ************************************ 00:28:09.125 END TEST raid_state_function_test_sb_4k 00:28:09.125 ************************************ 00:28:09.125 10:38:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:28:09.125 10:38:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:09.125 10:38:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.125 10:38:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:09.125 ************************************ 00:28:09.125 START TEST raid_superblock_test_4k 00:28:09.125 ************************************ 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=3513015 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 3513015 /var/tmp/spdk-raid.sock 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 3513015 ']' 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:09.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.125 10:38:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:09.125 [2024-07-26 10:38:21.979218] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
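Each test case runs against its own bdev_svc instance listening on a private RPC socket: the previous test was torn down with killprocess above, and raid_superblock_test_4k now starts a fresh app and waits for it with waitforlisten. A hedged sketch of that lifecycle with the paths copied from the trace; the poll loop only approximates the waitforlisten helper:

    # Sketch of the per-test app lifecycle (paths taken from the trace above).
    svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $svc -r $sock -L bdev_raid &                 # -L bdev_raid enables the *DEBUG* raid log lines
    raid_pid=$!
    until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # wait for the socket
    # ... raid RPCs run here ...
    kill $raid_pid; wait $raid_pid               # rough equivalent of killprocess at teardown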
00:28:09.125 [2024-07-26 10:38:21.979277] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513015 ] 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:09.385 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:09.385 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:09.385 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:09.385 [2024-07-26 10:38:22.111872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.385 [2024-07-26 10:38:22.155804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.385 [2024-07-26 10:38:22.217802] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:09.385 [2024-07-26 10:38:22.217839] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:09.954 10:38:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:28:10.213 malloc1 00:28:10.213 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:10.472 [2024-07-26 10:38:23.245527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:10.472 [2024-07-26 10:38:23.245570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
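The malloc1/pt1 pair being registered here (and repeated for malloc2/pt2 further down) is the base-bdev setup one iteration of the loop at @431 performs: a 32 MB malloc bdev with 4096-byte blocks (8192 blocks, matching num_blocks in the dumps above) wrapped by a passthru bdev with a fixed UUID. A sketch of the two RPCs for i=1, arguments taken verbatim from the trace:

    # Sketch only: base-bdev setup for one iteration of the pt loop.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_create 32 4096 -b malloc1          # 32 MB, 4k blocks
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 \
         -u 00000000-0000-0000-0000-000000000001                 # pt1 claims malloc1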
00:28:10.472 [2024-07-26 10:38:23.245589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb6270 00:28:10.472 [2024-07-26 10:38:23.245601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.472 [2024-07-26 10:38:23.247006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.472 [2024-07-26 10:38:23.247031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:10.472 pt1 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:10.472 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:28:10.731 malloc2 00:28:10.731 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:10.990 [2024-07-26 10:38:23.711007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:10.990 [2024-07-26 10:38:23.711049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.990 [2024-07-26 10:38:23.711065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd722f0 00:28:10.990 [2024-07-26 10:38:23.711076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.990 [2024-07-26 10:38:23.712551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.990 [2024-07-26 10:38:23.712576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:10.990 pt2 00:28:10.990 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:10.990 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:10.990 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:28:11.249 [2024-07-26 10:38:23.935608] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:11.249 [2024-07-26 10:38:23.936896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:11.249 [2024-07-26 10:38:23.937015] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd3bf20 00:28:11.249 [2024-07-26 10:38:23.937026] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 
7936, blocklen 4096 00:28:11.249 [2024-07-26 10:38:23.937214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc05f70 00:28:11.249 [2024-07-26 10:38:23.937338] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd3bf20 00:28:11.249 [2024-07-26 10:38:23.937347] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd3bf20 00:28:11.249 [2024-07-26 10:38:23.937451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.249 10:38:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.508 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:11.508 "name": "raid_bdev1", 00:28:11.508 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:11.508 "strip_size_kb": 0, 00:28:11.508 "state": "online", 00:28:11.508 "raid_level": "raid1", 00:28:11.508 "superblock": true, 00:28:11.508 "num_base_bdevs": 2, 00:28:11.508 "num_base_bdevs_discovered": 2, 00:28:11.508 "num_base_bdevs_operational": 2, 00:28:11.508 "base_bdevs_list": [ 00:28:11.508 { 00:28:11.508 "name": "pt1", 00:28:11.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:11.508 "is_configured": true, 00:28:11.508 "data_offset": 256, 00:28:11.508 "data_size": 7936 00:28:11.508 }, 00:28:11.508 { 00:28:11.508 "name": "pt2", 00:28:11.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:11.508 "is_configured": true, 00:28:11.508 "data_offset": 256, 00:28:11.508 "data_size": 7936 00:28:11.508 } 00:28:11.508 ] 00:28:11.508 }' 00:28:11.508 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:11.508 10:38:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:12.077 10:38:24 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:12.077 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:12.077 [2024-07-26 10:38:24.962524] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:12.336 10:38:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:12.336 "name": "raid_bdev1", 00:28:12.336 "aliases": [ 00:28:12.336 "39111a0e-ae6b-478e-82fd-26161b53b641" 00:28:12.336 ], 00:28:12.336 "product_name": "Raid Volume", 00:28:12.336 "block_size": 4096, 00:28:12.336 "num_blocks": 7936, 00:28:12.336 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:12.336 "assigned_rate_limits": { 00:28:12.336 "rw_ios_per_sec": 0, 00:28:12.336 "rw_mbytes_per_sec": 0, 00:28:12.336 "r_mbytes_per_sec": 0, 00:28:12.336 "w_mbytes_per_sec": 0 00:28:12.336 }, 00:28:12.336 "claimed": false, 00:28:12.336 "zoned": false, 00:28:12.336 "supported_io_types": { 00:28:12.336 "read": true, 00:28:12.336 "write": true, 00:28:12.336 "unmap": false, 00:28:12.336 "flush": false, 00:28:12.336 "reset": true, 00:28:12.336 "nvme_admin": false, 00:28:12.336 "nvme_io": false, 00:28:12.336 "nvme_io_md": false, 00:28:12.336 "write_zeroes": true, 00:28:12.336 "zcopy": false, 00:28:12.336 "get_zone_info": false, 00:28:12.336 "zone_management": false, 00:28:12.336 "zone_append": false, 00:28:12.336 "compare": false, 00:28:12.336 "compare_and_write": false, 00:28:12.336 "abort": false, 00:28:12.336 "seek_hole": false, 00:28:12.336 "seek_data": false, 00:28:12.336 "copy": false, 00:28:12.336 "nvme_iov_md": false 00:28:12.336 }, 00:28:12.336 "memory_domains": [ 00:28:12.336 { 00:28:12.336 "dma_device_id": "system", 00:28:12.336 "dma_device_type": 1 00:28:12.336 }, 00:28:12.336 { 00:28:12.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.336 "dma_device_type": 2 00:28:12.336 }, 00:28:12.336 { 00:28:12.336 "dma_device_id": "system", 00:28:12.336 "dma_device_type": 1 00:28:12.336 }, 00:28:12.336 { 00:28:12.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.336 "dma_device_type": 2 00:28:12.336 } 00:28:12.336 ], 00:28:12.336 "driver_specific": { 00:28:12.336 "raid": { 00:28:12.336 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:12.336 "strip_size_kb": 0, 00:28:12.336 "state": "online", 00:28:12.336 "raid_level": "raid1", 00:28:12.336 "superblock": true, 00:28:12.336 "num_base_bdevs": 2, 00:28:12.336 "num_base_bdevs_discovered": 2, 00:28:12.336 "num_base_bdevs_operational": 2, 00:28:12.336 "base_bdevs_list": [ 00:28:12.336 { 00:28:12.336 "name": "pt1", 00:28:12.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:12.337 "is_configured": true, 00:28:12.337 "data_offset": 256, 00:28:12.337 "data_size": 7936 00:28:12.337 }, 00:28:12.337 { 00:28:12.337 "name": "pt2", 00:28:12.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:12.337 "is_configured": true, 00:28:12.337 "data_offset": 256, 00:28:12.337 "data_size": 7936 00:28:12.337 } 00:28:12.337 ] 00:28:12.337 } 00:28:12.337 } 00:28:12.337 }' 00:28:12.337 10:38:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:12.337 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:12.337 pt2' 00:28:12.337 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:12.337 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:12.337 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:12.596 "name": "pt1", 00:28:12.596 "aliases": [ 00:28:12.596 "00000000-0000-0000-0000-000000000001" 00:28:12.596 ], 00:28:12.596 "product_name": "passthru", 00:28:12.596 "block_size": 4096, 00:28:12.596 "num_blocks": 8192, 00:28:12.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:12.596 "assigned_rate_limits": { 00:28:12.596 "rw_ios_per_sec": 0, 00:28:12.596 "rw_mbytes_per_sec": 0, 00:28:12.596 "r_mbytes_per_sec": 0, 00:28:12.596 "w_mbytes_per_sec": 0 00:28:12.596 }, 00:28:12.596 "claimed": true, 00:28:12.596 "claim_type": "exclusive_write", 00:28:12.596 "zoned": false, 00:28:12.596 "supported_io_types": { 00:28:12.596 "read": true, 00:28:12.596 "write": true, 00:28:12.596 "unmap": true, 00:28:12.596 "flush": true, 00:28:12.596 "reset": true, 00:28:12.596 "nvme_admin": false, 00:28:12.596 "nvme_io": false, 00:28:12.596 "nvme_io_md": false, 00:28:12.596 "write_zeroes": true, 00:28:12.596 "zcopy": true, 00:28:12.596 "get_zone_info": false, 00:28:12.596 "zone_management": false, 00:28:12.596 "zone_append": false, 00:28:12.596 "compare": false, 00:28:12.596 "compare_and_write": false, 00:28:12.596 "abort": true, 00:28:12.596 "seek_hole": false, 00:28:12.596 "seek_data": false, 00:28:12.596 "copy": true, 00:28:12.596 "nvme_iov_md": false 00:28:12.596 }, 00:28:12.596 "memory_domains": [ 00:28:12.596 { 00:28:12.596 "dma_device_id": "system", 00:28:12.596 "dma_device_type": 1 00:28:12.596 }, 00:28:12.596 { 00:28:12.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.596 "dma_device_type": 2 00:28:12.596 } 00:28:12.596 ], 00:28:12.596 "driver_specific": { 00:28:12.596 "passthru": { 00:28:12.596 "name": "pt1", 00:28:12.596 "base_bdev_name": "malloc1" 00:28:12.596 } 00:28:12.596 } 00:28:12.596 }' 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.596 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.855 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:12.856 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.856 10:38:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.856 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:12.856 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:12.856 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:12.856 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:13.115 "name": "pt2", 00:28:13.115 "aliases": [ 00:28:13.115 "00000000-0000-0000-0000-000000000002" 00:28:13.115 ], 00:28:13.115 "product_name": "passthru", 00:28:13.115 "block_size": 4096, 00:28:13.115 "num_blocks": 8192, 00:28:13.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.115 "assigned_rate_limits": { 00:28:13.115 "rw_ios_per_sec": 0, 00:28:13.115 "rw_mbytes_per_sec": 0, 00:28:13.115 "r_mbytes_per_sec": 0, 00:28:13.115 "w_mbytes_per_sec": 0 00:28:13.115 }, 00:28:13.115 "claimed": true, 00:28:13.115 "claim_type": "exclusive_write", 00:28:13.115 "zoned": false, 00:28:13.115 "supported_io_types": { 00:28:13.115 "read": true, 00:28:13.115 "write": true, 00:28:13.115 "unmap": true, 00:28:13.115 "flush": true, 00:28:13.115 "reset": true, 00:28:13.115 "nvme_admin": false, 00:28:13.115 "nvme_io": false, 00:28:13.115 "nvme_io_md": false, 00:28:13.115 "write_zeroes": true, 00:28:13.115 "zcopy": true, 00:28:13.115 "get_zone_info": false, 00:28:13.115 "zone_management": false, 00:28:13.115 "zone_append": false, 00:28:13.115 "compare": false, 00:28:13.115 "compare_and_write": false, 00:28:13.115 "abort": true, 00:28:13.115 "seek_hole": false, 00:28:13.115 "seek_data": false, 00:28:13.115 "copy": true, 00:28:13.115 "nvme_iov_md": false 00:28:13.115 }, 00:28:13.115 "memory_domains": [ 00:28:13.115 { 00:28:13.115 "dma_device_id": "system", 00:28:13.115 "dma_device_type": 1 00:28:13.115 }, 00:28:13.115 { 00:28:13.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.115 "dma_device_type": 2 00:28:13.115 } 00:28:13.115 ], 00:28:13.115 "driver_specific": { 00:28:13.115 "passthru": { 00:28:13.115 "name": "pt2", 00:28:13.115 "base_bdev_name": "malloc2" 00:28:13.115 } 00:28:13.115 } 00:28:13.115 }' 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:13.115 10:38:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
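With pt1 and pt2 in place, the trace above assembled the raid1 volume with an on-disk superblock (-s) and verify_raid_bdev_state then read it back through bdev_raid_get_bdevs. A minimal sketch of that assembly and the state check, reusing the exact RPC invocations from the trace:

    # Sketch only: assemble the raid1 bdev and re-check the fields the test asserts on.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r .state                       # expected: online
    echo "$info" | jq -r .num_base_bdevs_discovered   # expected: 2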
00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:13.374 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:28:13.633 [2024-07-26 10:38:26.362213] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:13.633 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=39111a0e-ae6b-478e-82fd-26161b53b641 00:28:13.633 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 39111a0e-ae6b-478e-82fd-26161b53b641 ']' 00:28:13.633 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:13.893 [2024-07-26 10:38:26.590585] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:13.893 [2024-07-26 10:38:26.590603] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:13.893 [2024-07-26 10:38:26.590649] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:13.893 [2024-07-26 10:38:26.590695] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:13.893 [2024-07-26 10:38:26.590706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3bf20 name raid_bdev1, state offline 00:28:13.893 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.893 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:28:14.151 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:28:14.151 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:28:14.152 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:14.152 10:38:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:14.411 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:14.411 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:14.411 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:14.411 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:14.670 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:14.929 [2024-07-26 10:38:27.737732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:14.929 [2024-07-26 10:38:27.738990] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:14.929 [2024-07-26 10:38:27.739040] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:14.929 [2024-07-26 10:38:27.739076] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:14.929 [2024-07-26 10:38:27.739093] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:14.929 [2024-07-26 10:38:27.739101] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd71a60 name raid_bdev1, state configuring 00:28:14.929 request: 00:28:14.929 { 00:28:14.929 "name": "raid_bdev1", 00:28:14.929 "raid_level": "raid1", 00:28:14.929 "base_bdevs": [ 00:28:14.929 "malloc1", 00:28:14.929 "malloc2" 00:28:14.929 ], 00:28:14.929 "superblock": false, 00:28:14.929 "method": "bdev_raid_create", 00:28:14.929 "req_id": 1 00:28:14.929 } 00:28:14.929 Got JSON-RPC error response 00:28:14.929 response: 00:28:14.929 { 00:28:14.929 "code": -17, 00:28:14.929 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:14.929 } 00:28:14.929 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:28:14.929 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:14.929 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:14.929 10:38:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:14.929 10:38:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.929 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:28:15.188 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:28:15.188 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:28:15.188 10:38:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:15.447 [2024-07-26 10:38:28.190874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:15.447 [2024-07-26 10:38:28.190918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.447 [2024-07-26 10:38:28.190934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb73a0 00:28:15.447 [2024-07-26 10:38:28.190945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.447 [2024-07-26 10:38:28.192400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.447 [2024-07-26 10:38:28.192426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:15.447 [2024-07-26 10:38:28.192483] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:15.447 [2024-07-26 10:38:28.192506] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:15.447 pt1 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:15.447 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:15.448 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:15.448 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:15.448 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.448 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.707 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:15.707 "name": "raid_bdev1", 00:28:15.707 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:15.707 "strip_size_kb": 0, 00:28:15.707 "state": "configuring", 00:28:15.707 "raid_level": "raid1", 00:28:15.707 "superblock": true, 00:28:15.707 "num_base_bdevs": 2, 
00:28:15.707 "num_base_bdevs_discovered": 1, 00:28:15.707 "num_base_bdevs_operational": 2, 00:28:15.707 "base_bdevs_list": [ 00:28:15.707 { 00:28:15.707 "name": "pt1", 00:28:15.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.707 "is_configured": true, 00:28:15.707 "data_offset": 256, 00:28:15.707 "data_size": 7936 00:28:15.707 }, 00:28:15.707 { 00:28:15.707 "name": null, 00:28:15.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.707 "is_configured": false, 00:28:15.707 "data_offset": 256, 00:28:15.707 "data_size": 7936 00:28:15.707 } 00:28:15.707 ] 00:28:15.707 }' 00:28:15.707 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:15.707 10:38:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:16.275 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:28:16.275 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:28:16.275 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:16.275 10:38:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:16.275 [2024-07-26 10:38:29.165459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:16.275 [2024-07-26 10:38:29.165503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.275 [2024-07-26 10:38:29.165521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb7a50 00:28:16.275 [2024-07-26 10:38:29.165532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.275 [2024-07-26 10:38:29.165830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.275 [2024-07-26 10:38:29.165847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:16.275 [2024-07-26 10:38:29.165902] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:16.275 [2024-07-26 10:38:29.165918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:16.275 [2024-07-26 10:38:29.166007] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd3f5c0 00:28:16.275 [2024-07-26 10:38:29.166016] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:16.275 [2024-07-26 10:38:29.166175] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd3dee0 00:28:16.275 [2024-07-26 10:38:29.166291] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd3f5c0 00:28:16.275 [2024-07-26 10:38:29.166300] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd3f5c0 00:28:16.275 [2024-07-26 10:38:29.166386] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.275 pt2 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.535 "name": "raid_bdev1", 00:28:16.535 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:16.535 "strip_size_kb": 0, 00:28:16.535 "state": "online", 00:28:16.535 "raid_level": "raid1", 00:28:16.535 "superblock": true, 00:28:16.535 "num_base_bdevs": 2, 00:28:16.535 "num_base_bdevs_discovered": 2, 00:28:16.535 "num_base_bdevs_operational": 2, 00:28:16.535 "base_bdevs_list": [ 00:28:16.535 { 00:28:16.535 "name": "pt1", 00:28:16.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:16.535 "is_configured": true, 00:28:16.535 "data_offset": 256, 00:28:16.535 "data_size": 7936 00:28:16.535 }, 00:28:16.535 { 00:28:16.535 "name": "pt2", 00:28:16.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.535 "is_configured": true, 00:28:16.535 "data_offset": 256, 00:28:16.535 "data_size": 7936 00:28:16.535 } 00:28:16.535 ] 00:28:16.535 }' 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.535 10:38:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:17.472 [2024-07-26 10:38:30.228479] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.472 10:38:30 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:17.472 "name": "raid_bdev1", 00:28:17.472 "aliases": [ 00:28:17.472 "39111a0e-ae6b-478e-82fd-26161b53b641" 00:28:17.472 ], 00:28:17.472 "product_name": "Raid Volume", 00:28:17.472 "block_size": 4096, 00:28:17.472 "num_blocks": 7936, 00:28:17.472 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:17.472 "assigned_rate_limits": { 00:28:17.472 "rw_ios_per_sec": 0, 00:28:17.472 "rw_mbytes_per_sec": 0, 00:28:17.472 "r_mbytes_per_sec": 0, 00:28:17.472 "w_mbytes_per_sec": 0 00:28:17.472 }, 00:28:17.472 "claimed": false, 00:28:17.472 "zoned": false, 00:28:17.472 "supported_io_types": { 00:28:17.472 "read": true, 00:28:17.472 "write": true, 00:28:17.473 "unmap": false, 00:28:17.473 "flush": false, 00:28:17.473 "reset": true, 00:28:17.473 "nvme_admin": false, 00:28:17.473 "nvme_io": false, 00:28:17.473 "nvme_io_md": false, 00:28:17.473 "write_zeroes": true, 00:28:17.473 "zcopy": false, 00:28:17.473 "get_zone_info": false, 00:28:17.473 "zone_management": false, 00:28:17.473 "zone_append": false, 00:28:17.473 "compare": false, 00:28:17.473 "compare_and_write": false, 00:28:17.473 "abort": false, 00:28:17.473 "seek_hole": false, 00:28:17.473 "seek_data": false, 00:28:17.473 "copy": false, 00:28:17.473 "nvme_iov_md": false 00:28:17.473 }, 00:28:17.473 "memory_domains": [ 00:28:17.473 { 00:28:17.473 "dma_device_id": "system", 00:28:17.473 "dma_device_type": 1 00:28:17.473 }, 00:28:17.473 { 00:28:17.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.473 "dma_device_type": 2 00:28:17.473 }, 00:28:17.473 { 00:28:17.473 "dma_device_id": "system", 00:28:17.473 "dma_device_type": 1 00:28:17.473 }, 00:28:17.473 { 00:28:17.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.473 "dma_device_type": 2 00:28:17.473 } 00:28:17.473 ], 00:28:17.473 "driver_specific": { 00:28:17.473 "raid": { 00:28:17.473 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:17.473 "strip_size_kb": 0, 00:28:17.473 "state": "online", 00:28:17.473 "raid_level": "raid1", 00:28:17.473 "superblock": true, 00:28:17.473 "num_base_bdevs": 2, 00:28:17.473 "num_base_bdevs_discovered": 2, 00:28:17.473 "num_base_bdevs_operational": 2, 00:28:17.473 "base_bdevs_list": [ 00:28:17.473 { 00:28:17.473 "name": "pt1", 00:28:17.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:17.473 "is_configured": true, 00:28:17.473 "data_offset": 256, 00:28:17.473 "data_size": 7936 00:28:17.473 }, 00:28:17.473 { 00:28:17.473 "name": "pt2", 00:28:17.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:17.473 "is_configured": true, 00:28:17.473 "data_offset": 256, 00:28:17.473 "data_size": 7936 00:28:17.473 } 00:28:17.473 ] 00:28:17.473 } 00:28:17.473 } 00:28:17.473 }' 00:28:17.473 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:17.473 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:17.473 pt2' 00:28:17.473 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:17.473 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:17.473 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:17.731 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:17.731 "name": "pt1", 
00:28:17.731 "aliases": [ 00:28:17.731 "00000000-0000-0000-0000-000000000001" 00:28:17.731 ], 00:28:17.731 "product_name": "passthru", 00:28:17.731 "block_size": 4096, 00:28:17.731 "num_blocks": 8192, 00:28:17.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:17.731 "assigned_rate_limits": { 00:28:17.731 "rw_ios_per_sec": 0, 00:28:17.731 "rw_mbytes_per_sec": 0, 00:28:17.731 "r_mbytes_per_sec": 0, 00:28:17.731 "w_mbytes_per_sec": 0 00:28:17.731 }, 00:28:17.731 "claimed": true, 00:28:17.731 "claim_type": "exclusive_write", 00:28:17.731 "zoned": false, 00:28:17.731 "supported_io_types": { 00:28:17.731 "read": true, 00:28:17.731 "write": true, 00:28:17.731 "unmap": true, 00:28:17.731 "flush": true, 00:28:17.731 "reset": true, 00:28:17.731 "nvme_admin": false, 00:28:17.731 "nvme_io": false, 00:28:17.731 "nvme_io_md": false, 00:28:17.731 "write_zeroes": true, 00:28:17.731 "zcopy": true, 00:28:17.731 "get_zone_info": false, 00:28:17.731 "zone_management": false, 00:28:17.731 "zone_append": false, 00:28:17.731 "compare": false, 00:28:17.731 "compare_and_write": false, 00:28:17.731 "abort": true, 00:28:17.731 "seek_hole": false, 00:28:17.731 "seek_data": false, 00:28:17.731 "copy": true, 00:28:17.731 "nvme_iov_md": false 00:28:17.731 }, 00:28:17.731 "memory_domains": [ 00:28:17.731 { 00:28:17.731 "dma_device_id": "system", 00:28:17.731 "dma_device_type": 1 00:28:17.731 }, 00:28:17.731 { 00:28:17.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.731 "dma_device_type": 2 00:28:17.731 } 00:28:17.731 ], 00:28:17.731 "driver_specific": { 00:28:17.731 "passthru": { 00:28:17.731 "name": "pt1", 00:28:17.731 "base_bdev_name": "malloc1" 00:28:17.731 } 00:28:17.731 } 00:28:17.731 }' 00:28:17.731 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.731 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.731 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:17.731 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:17.991 10:38:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:18.250 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:18.250 "name": "pt2", 00:28:18.250 "aliases": [ 00:28:18.250 "00000000-0000-0000-0000-000000000002" 
00:28:18.250 ], 00:28:18.250 "product_name": "passthru", 00:28:18.250 "block_size": 4096, 00:28:18.250 "num_blocks": 8192, 00:28:18.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:18.250 "assigned_rate_limits": { 00:28:18.250 "rw_ios_per_sec": 0, 00:28:18.250 "rw_mbytes_per_sec": 0, 00:28:18.250 "r_mbytes_per_sec": 0, 00:28:18.250 "w_mbytes_per_sec": 0 00:28:18.250 }, 00:28:18.250 "claimed": true, 00:28:18.250 "claim_type": "exclusive_write", 00:28:18.250 "zoned": false, 00:28:18.250 "supported_io_types": { 00:28:18.250 "read": true, 00:28:18.250 "write": true, 00:28:18.250 "unmap": true, 00:28:18.250 "flush": true, 00:28:18.250 "reset": true, 00:28:18.250 "nvme_admin": false, 00:28:18.250 "nvme_io": false, 00:28:18.250 "nvme_io_md": false, 00:28:18.250 "write_zeroes": true, 00:28:18.250 "zcopy": true, 00:28:18.250 "get_zone_info": false, 00:28:18.250 "zone_management": false, 00:28:18.250 "zone_append": false, 00:28:18.250 "compare": false, 00:28:18.250 "compare_and_write": false, 00:28:18.250 "abort": true, 00:28:18.250 "seek_hole": false, 00:28:18.250 "seek_data": false, 00:28:18.250 "copy": true, 00:28:18.250 "nvme_iov_md": false 00:28:18.250 }, 00:28:18.250 "memory_domains": [ 00:28:18.250 { 00:28:18.250 "dma_device_id": "system", 00:28:18.250 "dma_device_type": 1 00:28:18.250 }, 00:28:18.250 { 00:28:18.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.250 "dma_device_type": 2 00:28:18.250 } 00:28:18.250 ], 00:28:18.250 "driver_specific": { 00:28:18.250 "passthru": { 00:28:18.250 "name": "pt2", 00:28:18.250 "base_bdev_name": "malloc2" 00:28:18.250 } 00:28:18.250 } 00:28:18.250 }' 00:28:18.250 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:18.250 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:18.509 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:18.767 [2024-07-26 10:38:31.644227] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 39111a0e-ae6b-478e-82fd-26161b53b641 '!=' 39111a0e-ae6b-478e-82fd-26161b53b641 ']' 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # 
has_redundancy raid1 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:28:18.767 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:19.026 [2024-07-26 10:38:31.876710] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.026 10:38:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.285 10:38:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.285 "name": "raid_bdev1", 00:28:19.285 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:19.285 "strip_size_kb": 0, 00:28:19.285 "state": "online", 00:28:19.285 "raid_level": "raid1", 00:28:19.285 "superblock": true, 00:28:19.285 "num_base_bdevs": 2, 00:28:19.285 "num_base_bdevs_discovered": 1, 00:28:19.285 "num_base_bdevs_operational": 1, 00:28:19.285 "base_bdevs_list": [ 00:28:19.285 { 00:28:19.285 "name": null, 00:28:19.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.285 "is_configured": false, 00:28:19.285 "data_offset": 256, 00:28:19.285 "data_size": 7936 00:28:19.285 }, 00:28:19.285 { 00:28:19.285 "name": "pt2", 00:28:19.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:19.285 "is_configured": true, 00:28:19.285 "data_offset": 256, 00:28:19.285 "data_size": 7936 00:28:19.285 } 00:28:19.285 ] 00:28:19.285 }' 00:28:19.285 10:38:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.285 10:38:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:19.852 10:38:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:20.110 [2024-07-26 10:38:32.927446] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:20.110 [2024-07-26 10:38:32.927469] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:28:20.110 [2024-07-26 10:38:32.927514] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:20.110 [2024-07-26 10:38:32.927553] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:20.110 [2024-07-26 10:38:32.927564] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3f5c0 name raid_bdev1, state offline 00:28:20.110 10:38:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.110 10:38:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:28:20.368 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:28:20.368 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:28:20.368 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:20.368 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:20.368 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:20.627 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:20.627 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:20.627 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:28:20.627 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:28:20.627 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:28:20.627 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:20.885 [2024-07-26 10:38:33.617236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:20.885 [2024-07-26 10:38:33.617279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:20.885 [2024-07-26 10:38:33.617295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb7e20 00:28:20.885 [2024-07-26 10:38:33.617307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:20.885 [2024-07-26 10:38:33.618758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:20.885 [2024-07-26 10:38:33.618784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:20.885 [2024-07-26 10:38:33.618840] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:20.885 [2024-07-26 10:38:33.618864] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:20.885 [2024-07-26 10:38:33.618935] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd3d720 00:28:20.885 [2024-07-26 10:38:33.618944] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:20.885 [2024-07-26 10:38:33.619094] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc05e20 00:28:20.885 [2024-07-26 10:38:33.619212] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0xd3d720 00:28:20.885 [2024-07-26 10:38:33.619222] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd3d720 00:28:20.885 [2024-07-26 10:38:33.619309] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:20.885 pt2 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.885 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.144 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.144 "name": "raid_bdev1", 00:28:21.144 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:21.144 "strip_size_kb": 0, 00:28:21.144 "state": "online", 00:28:21.144 "raid_level": "raid1", 00:28:21.144 "superblock": true, 00:28:21.144 "num_base_bdevs": 2, 00:28:21.144 "num_base_bdevs_discovered": 1, 00:28:21.144 "num_base_bdevs_operational": 1, 00:28:21.144 "base_bdevs_list": [ 00:28:21.144 { 00:28:21.144 "name": null, 00:28:21.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.144 "is_configured": false, 00:28:21.144 "data_offset": 256, 00:28:21.144 "data_size": 7936 00:28:21.144 }, 00:28:21.144 { 00:28:21.144 "name": "pt2", 00:28:21.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:21.144 "is_configured": true, 00:28:21.144 "data_offset": 256, 00:28:21.144 "data_size": 7936 00:28:21.144 } 00:28:21.144 ] 00:28:21.144 }' 00:28:21.144 10:38:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.144 10:38:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:21.741 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:21.999 [2024-07-26 10:38:34.643917] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:21.999 [2024-07-26 10:38:34.643939] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:22.000 [2024-07-26 10:38:34.643988] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:22.000 [2024-07-26 10:38:34.644027] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:22.000 [2024-07-26 10:38:34.644037] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3d720 name raid_bdev1, state offline 00:28:22.000 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.000 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:28:22.000 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:28:22.000 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:28:22.000 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:28:22.000 10:38:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:22.258 [2024-07-26 10:38:35.105311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:22.258 [2024-07-26 10:38:35.105352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.258 [2024-07-26 10:38:35.105368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd3c3e0 00:28:22.258 [2024-07-26 10:38:35.105379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.258 [2024-07-26 10:38:35.106839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.258 [2024-07-26 10:38:35.106864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:22.258 [2024-07-26 10:38:35.106917] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:22.258 [2024-07-26 10:38:35.106938] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:22.258 [2024-07-26 10:38:35.107023] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:22.258 [2024-07-26 10:38:35.107034] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:22.258 [2024-07-26 10:38:35.107046] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3dff0 name raid_bdev1, state configuring 00:28:22.258 [2024-07-26 10:38:35.107065] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:22.258 [2024-07-26 10:38:35.107113] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd3ca80 00:28:22.258 [2024-07-26 10:38:35.107122] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:22.258 [2024-07-26 10:38:35.107275] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc1b7f0 00:28:22.258 [2024-07-26 10:38:35.107382] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd3ca80 00:28:22.258 [2024-07-26 10:38:35.107391] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd3ca80 00:28:22.258 [2024-07-26 10:38:35.107478] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.258 pt1 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:28:22.258 10:38:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.258 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.517 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.517 "name": "raid_bdev1", 00:28:22.517 "uuid": "39111a0e-ae6b-478e-82fd-26161b53b641", 00:28:22.517 "strip_size_kb": 0, 00:28:22.517 "state": "online", 00:28:22.517 "raid_level": "raid1", 00:28:22.517 "superblock": true, 00:28:22.517 "num_base_bdevs": 2, 00:28:22.517 "num_base_bdevs_discovered": 1, 00:28:22.517 "num_base_bdevs_operational": 1, 00:28:22.517 "base_bdevs_list": [ 00:28:22.517 { 00:28:22.517 "name": null, 00:28:22.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.517 "is_configured": false, 00:28:22.517 "data_offset": 256, 00:28:22.517 "data_size": 7936 00:28:22.517 }, 00:28:22.517 { 00:28:22.517 "name": "pt2", 00:28:22.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:22.517 "is_configured": true, 00:28:22.517 "data_offset": 256, 00:28:22.517 "data_size": 7936 00:28:22.517 } 00:28:22.517 ] 00:28:22.517 }' 00:28:22.517 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.517 10:38:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:23.085 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:23.085 10:38:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:23.344 10:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:28:23.344 10:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:23.344 10:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:28:23.604 [2024-07-26 10:38:36.376854] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@573 -- # '[' 39111a0e-ae6b-478e-82fd-26161b53b641 '!=' 39111a0e-ae6b-478e-82fd-26161b53b641 ']' 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 3513015 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 3513015 ']' 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 3513015 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3513015 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3513015' 00:28:23.604 killing process with pid 3513015 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 3513015 00:28:23.604 [2024-07-26 10:38:36.457542] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:23.604 [2024-07-26 10:38:36.457591] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.604 [2024-07-26 10:38:36.457628] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:23.604 [2024-07-26 10:38:36.457643] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd3ca80 name raid_bdev1, state offline 00:28:23.604 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 3513015 00:28:23.604 [2024-07-26 10:38:36.472901] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:23.863 10:38:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:28:23.863 00:28:23.863 real 0m14.732s 00:28:23.864 user 0m26.680s 00:28:23.864 sys 0m2.799s 00:28:23.864 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.864 10:38:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:28:23.864 ************************************ 00:28:23.864 END TEST raid_superblock_test_4k 00:28:23.864 ************************************ 00:28:23.864 10:38:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:28:23.864 10:38:36 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:28:23.864 10:38:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:23.864 10:38:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:23.864 10:38:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:23.864 ************************************ 00:28:23.864 START TEST raid_rebuild_test_sb_4k 00:28:23.864 ************************************ 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local 
num_base_bdevs=2 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=3515887 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 3515887 /var/tmp/spdk-raid.sock 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 3515887 ']' 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:23.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
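Note (illustrative recap, not part of the captured output): the raid_rebuild_test_sb_4k run that starts here drives bdevperf entirely over the /var/tmp/spdk-raid.sock RPC socket. A minimal sketch of the setup sequence it performs, using only RPC calls and arguments that appear later in this trace, would be:
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Two malloc bdevs (size 32, block size 4096, as invoked below), each wrapped in a passthru bdev.
$RPC bdev_malloc_create 32 4096 -b BaseBdev1_malloc
$RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
$RPC bdev_malloc_create 32 4096 -b BaseBdev2_malloc
$RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
# RAID1 with an on-disk superblock (-s), as exercised by this test.
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
# State verification is plain jq over the RPC JSON, e.g. the discovered base bdev count.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'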
00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.864 10:38:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:24.123 [2024-07-26 10:38:36.802543] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:28:24.123 [2024-07-26 10:38:36.802603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515887 ] 00:28:24.123 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:24.123 Zero copy mechanism will not be used. 00:28:24.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.123 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:24.123 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 
0000:3f:01.3 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:24.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:24.124 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:24.124 [2024-07-26 10:38:36.935866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.124 [2024-07-26 10:38:36.980359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.383 [2024-07-26 10:38:37.034689] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.383 [2024-07-26 10:38:37.034733] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.950 10:38:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.951 10:38:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:28:24.951 10:38:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:24.951 10:38:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:28:25.209 BaseBdev1_malloc 00:28:25.209 10:38:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:25.468 [2024-07-26 10:38:38.148898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:25.468 [2024-07-26 10:38:38.148940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.468 [2024-07-26 10:38:38.148961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x277c370 00:28:25.468 [2024-07-26 10:38:38.148973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.468 [2024-07-26 10:38:38.150402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:28:25.468 [2024-07-26 10:38:38.150429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:25.468 BaseBdev1 00:28:25.468 10:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:25.468 10:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:28:25.727 BaseBdev2_malloc 00:28:25.727 10:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:25.727 [2024-07-26 10:38:38.586395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:25.727 [2024-07-26 10:38:38.586435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.727 [2024-07-26 10:38:38.586453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27380d0 00:28:25.727 [2024-07-26 10:38:38.586464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.727 [2024-07-26 10:38:38.587852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.727 [2024-07-26 10:38:38.587878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:25.727 BaseBdev2 00:28:25.727 10:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:28:25.986 spare_malloc 00:28:25.986 10:38:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:26.245 spare_delay 00:28:26.245 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:26.505 [2024-07-26 10:38:39.268468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:26.505 [2024-07-26 10:38:39.268508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.505 [2024-07-26 10:38:39.268526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2726070 00:28:26.505 [2024-07-26 10:38:39.268537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.505 [2024-07-26 10:38:39.269905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.505 [2024-07-26 10:38:39.269932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:26.505 spare 00:28:26.505 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:26.765 [2024-07-26 10:38:39.489064] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:26.765 [2024-07-26 10:38:39.490212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:26.765 [2024-07-26 10:38:39.490344] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2727250 00:28:26.765 [2024-07-26 10:38:39.490356] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:26.765 [2024-07-26 10:38:39.490530] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25e14f0 00:28:26.765 [2024-07-26 10:38:39.490651] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2727250 00:28:26.765 [2024-07-26 10:38:39.490660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2727250 00:28:26.765 [2024-07-26 10:38:39.490759] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.765 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.024 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:27.024 "name": "raid_bdev1", 00:28:27.024 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:27.024 "strip_size_kb": 0, 00:28:27.024 "state": "online", 00:28:27.024 "raid_level": "raid1", 00:28:27.024 "superblock": true, 00:28:27.024 "num_base_bdevs": 2, 00:28:27.024 "num_base_bdevs_discovered": 2, 00:28:27.024 "num_base_bdevs_operational": 2, 00:28:27.024 "base_bdevs_list": [ 00:28:27.024 { 00:28:27.024 "name": "BaseBdev1", 00:28:27.024 "uuid": "f23f21b0-391f-583c-b7e8-d0c744522504", 00:28:27.024 "is_configured": true, 00:28:27.024 "data_offset": 256, 00:28:27.024 "data_size": 7936 00:28:27.024 }, 00:28:27.024 { 00:28:27.024 "name": "BaseBdev2", 00:28:27.025 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:27.025 "is_configured": true, 00:28:27.025 "data_offset": 256, 00:28:27.025 "data_size": 7936 00:28:27.025 } 00:28:27.025 ] 00:28:27.025 }' 00:28:27.025 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:27.025 10:38:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:27.593 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:27.593 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:27.852 [2024-07-26 10:38:40.536018] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:27.852 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:28:27.852 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.852 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:28.112 10:38:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:28.112 [2024-07-26 10:38:40.997048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2723900 00:28:28.112 /dev/nbd0 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:28.371 10:38:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:28.371 1+0 records in 00:28:28.371 1+0 records out 00:28:28.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168572 s, 24.3 MB/s 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:28:28.371 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:28:28.940 7936+0 records in 00:28:28.940 7936+0 records out 00:28:28.940 32505856 bytes (33 MB, 31 MiB) copied, 0.677383 s, 48.0 MB/s 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:28.940 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:29.198 [2024-07-26 10:38:41.985220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:28:29.198 10:38:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:29.457 [2024-07-26 10:38:42.201794] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.457 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.715 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.715 "name": "raid_bdev1", 00:28:29.715 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:29.715 "strip_size_kb": 0, 00:28:29.715 "state": "online", 00:28:29.715 "raid_level": "raid1", 00:28:29.715 "superblock": true, 00:28:29.715 "num_base_bdevs": 2, 00:28:29.715 "num_base_bdevs_discovered": 1, 00:28:29.715 "num_base_bdevs_operational": 1, 00:28:29.715 "base_bdevs_list": [ 00:28:29.715 { 00:28:29.715 "name": null, 00:28:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.715 "is_configured": false, 00:28:29.715 "data_offset": 256, 00:28:29.715 "data_size": 7936 00:28:29.715 }, 00:28:29.716 { 00:28:29.716 "name": "BaseBdev2", 00:28:29.716 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:29.716 "is_configured": true, 00:28:29.716 "data_offset": 256, 00:28:29.716 "data_size": 7936 00:28:29.716 } 00:28:29.716 ] 00:28:29.716 }' 00:28:29.716 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.716 10:38:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:30.282 10:38:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:30.540 [2024-07-26 10:38:43.240755] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:30.540 [2024-07-26 10:38:43.245387] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2723900 00:28:30.540 [2024-07-26 10:38:43.247406] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:30.540 10:38:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@662 -- # sleep 1 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.474 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.732 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.732 "name": "raid_bdev1", 00:28:31.732 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:31.732 "strip_size_kb": 0, 00:28:31.732 "state": "online", 00:28:31.732 "raid_level": "raid1", 00:28:31.732 "superblock": true, 00:28:31.732 "num_base_bdevs": 2, 00:28:31.732 "num_base_bdevs_discovered": 2, 00:28:31.732 "num_base_bdevs_operational": 2, 00:28:31.732 "process": { 00:28:31.732 "type": "rebuild", 00:28:31.732 "target": "spare", 00:28:31.732 "progress": { 00:28:31.732 "blocks": 3072, 00:28:31.732 "percent": 38 00:28:31.732 } 00:28:31.732 }, 00:28:31.732 "base_bdevs_list": [ 00:28:31.732 { 00:28:31.732 "name": "spare", 00:28:31.732 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:31.732 "is_configured": true, 00:28:31.732 "data_offset": 256, 00:28:31.732 "data_size": 7936 00:28:31.732 }, 00:28:31.732 { 00:28:31.732 "name": "BaseBdev2", 00:28:31.732 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:31.732 "is_configured": true, 00:28:31.732 "data_offset": 256, 00:28:31.732 "data_size": 7936 00:28:31.732 } 00:28:31.732 ] 00:28:31.732 }' 00:28:31.732 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.732 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:31.732 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.732 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:31.732 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:31.991 [2024-07-26 10:38:44.789706] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:31.991 [2024-07-26 10:38:44.859183] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:31.991 [2024-07-26 10:38:44.859235] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.991 [2024-07-26 10:38:44.859250] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:31.991 [2024-07-26 10:38:44.859257] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.991 10:38:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.248 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:32.248 "name": "raid_bdev1", 00:28:32.248 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:32.248 "strip_size_kb": 0, 00:28:32.248 "state": "online", 00:28:32.248 "raid_level": "raid1", 00:28:32.248 "superblock": true, 00:28:32.248 "num_base_bdevs": 2, 00:28:32.249 "num_base_bdevs_discovered": 1, 00:28:32.249 "num_base_bdevs_operational": 1, 00:28:32.249 "base_bdevs_list": [ 00:28:32.249 { 00:28:32.249 "name": null, 00:28:32.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.249 "is_configured": false, 00:28:32.249 "data_offset": 256, 00:28:32.249 "data_size": 7936 00:28:32.249 }, 00:28:32.249 { 00:28:32.249 "name": "BaseBdev2", 00:28:32.249 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:32.249 "is_configured": true, 00:28:32.249 "data_offset": 256, 00:28:32.249 "data_size": 7936 00:28:32.249 } 00:28:32.249 ] 00:28:32.249 }' 00:28:32.249 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:32.249 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.815 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.073 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:33.073 "name": "raid_bdev1", 
00:28:33.073 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:33.073 "strip_size_kb": 0, 00:28:33.073 "state": "online", 00:28:33.073 "raid_level": "raid1", 00:28:33.073 "superblock": true, 00:28:33.073 "num_base_bdevs": 2, 00:28:33.073 "num_base_bdevs_discovered": 1, 00:28:33.073 "num_base_bdevs_operational": 1, 00:28:33.073 "base_bdevs_list": [ 00:28:33.073 { 00:28:33.073 "name": null, 00:28:33.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.073 "is_configured": false, 00:28:33.073 "data_offset": 256, 00:28:33.074 "data_size": 7936 00:28:33.074 }, 00:28:33.074 { 00:28:33.074 "name": "BaseBdev2", 00:28:33.074 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:33.074 "is_configured": true, 00:28:33.074 "data_offset": 256, 00:28:33.074 "data_size": 7936 00:28:33.074 } 00:28:33.074 ] 00:28:33.074 }' 00:28:33.074 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:33.074 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:33.074 10:38:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:33.332 10:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:33.332 10:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:33.590 [2024-07-26 10:38:46.487758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:33.590 [2024-07-26 10:38:46.492460] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2723900 00:28:33.848 [2024-07-26 10:38:46.493840] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:33.848 10:38:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.783 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.042 "name": "raid_bdev1", 00:28:35.042 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:35.042 "strip_size_kb": 0, 00:28:35.042 "state": "online", 00:28:35.042 "raid_level": "raid1", 00:28:35.042 "superblock": true, 00:28:35.042 "num_base_bdevs": 2, 00:28:35.042 "num_base_bdevs_discovered": 2, 00:28:35.042 "num_base_bdevs_operational": 2, 00:28:35.042 "process": { 00:28:35.042 "type": "rebuild", 00:28:35.042 "target": "spare", 00:28:35.042 "progress": { 00:28:35.042 "blocks": 3072, 00:28:35.042 "percent": 38 00:28:35.042 } 00:28:35.042 }, 
00:28:35.042 "base_bdevs_list": [ 00:28:35.042 { 00:28:35.042 "name": "spare", 00:28:35.042 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:35.042 "is_configured": true, 00:28:35.042 "data_offset": 256, 00:28:35.042 "data_size": 7936 00:28:35.042 }, 00:28:35.042 { 00:28:35.042 "name": "BaseBdev2", 00:28:35.042 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:35.042 "is_configured": true, 00:28:35.042 "data_offset": 256, 00:28:35.042 "data_size": 7936 00:28:35.042 } 00:28:35.042 ] 00:28:35.042 }' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:28:35.042 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=971 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.042 10:38:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.300 10:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.300 "name": "raid_bdev1", 00:28:35.300 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:35.300 "strip_size_kb": 0, 00:28:35.300 "state": "online", 00:28:35.300 "raid_level": "raid1", 00:28:35.300 "superblock": true, 00:28:35.300 "num_base_bdevs": 2, 00:28:35.300 "num_base_bdevs_discovered": 2, 00:28:35.300 "num_base_bdevs_operational": 2, 00:28:35.300 "process": { 00:28:35.300 "type": "rebuild", 00:28:35.300 "target": "spare", 00:28:35.300 "progress": { 00:28:35.300 "blocks": 3840, 00:28:35.300 "percent": 48 00:28:35.300 } 00:28:35.300 }, 00:28:35.300 "base_bdevs_list": [ 00:28:35.300 { 00:28:35.300 "name": "spare", 00:28:35.300 "uuid": 
"1bad2960-8289-570c-912e-076263b125f7", 00:28:35.300 "is_configured": true, 00:28:35.301 "data_offset": 256, 00:28:35.301 "data_size": 7936 00:28:35.301 }, 00:28:35.301 { 00:28:35.301 "name": "BaseBdev2", 00:28:35.301 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:35.301 "is_configured": true, 00:28:35.301 "data_offset": 256, 00:28:35.301 "data_size": 7936 00:28:35.301 } 00:28:35.301 ] 00:28:35.301 }' 00:28:35.301 10:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.301 10:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.301 10:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.301 10:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.301 10:38:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.714 "name": "raid_bdev1", 00:28:36.714 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:36.714 "strip_size_kb": 0, 00:28:36.714 "state": "online", 00:28:36.714 "raid_level": "raid1", 00:28:36.714 "superblock": true, 00:28:36.714 "num_base_bdevs": 2, 00:28:36.714 "num_base_bdevs_discovered": 2, 00:28:36.714 "num_base_bdevs_operational": 2, 00:28:36.714 "process": { 00:28:36.714 "type": "rebuild", 00:28:36.714 "target": "spare", 00:28:36.714 "progress": { 00:28:36.714 "blocks": 7168, 00:28:36.714 "percent": 90 00:28:36.714 } 00:28:36.714 }, 00:28:36.714 "base_bdevs_list": [ 00:28:36.714 { 00:28:36.714 "name": "spare", 00:28:36.714 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:36.714 "is_configured": true, 00:28:36.714 "data_offset": 256, 00:28:36.714 "data_size": 7936 00:28:36.714 }, 00:28:36.714 { 00:28:36.714 "name": "BaseBdev2", 00:28:36.714 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:36.714 "is_configured": true, 00:28:36.714 "data_offset": 256, 00:28:36.714 "data_size": 7936 00:28:36.714 } 00:28:36.714 ] 00:28:36.714 }' 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.714 10:38:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:36.714 10:38:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:36.714 [2024-07-26 10:38:49.616304] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:36.971 [2024-07-26 10:38:49.616360] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:36.971 [2024-07-26 10:38:49.616439] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.904 "name": "raid_bdev1", 00:28:37.904 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:37.904 "strip_size_kb": 0, 00:28:37.904 "state": "online", 00:28:37.904 "raid_level": "raid1", 00:28:37.904 "superblock": true, 00:28:37.904 "num_base_bdevs": 2, 00:28:37.904 "num_base_bdevs_discovered": 2, 00:28:37.904 "num_base_bdevs_operational": 2, 00:28:37.904 "base_bdevs_list": [ 00:28:37.904 { 00:28:37.904 "name": "spare", 00:28:37.904 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:37.904 "is_configured": true, 00:28:37.904 "data_offset": 256, 00:28:37.904 "data_size": 7936 00:28:37.904 }, 00:28:37.904 { 00:28:37.904 "name": "BaseBdev2", 00:28:37.904 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:37.904 "is_configured": true, 00:28:37.904 "data_offset": 256, 00:28:37.904 "data_size": 7936 00:28:37.904 } 00:28:37.904 ] 00:28:37.904 }' 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:37.904 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 
-- # local target=none 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.162 10:38:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.728 "name": "raid_bdev1", 00:28:38.728 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:38.728 "strip_size_kb": 0, 00:28:38.728 "state": "online", 00:28:38.728 "raid_level": "raid1", 00:28:38.728 "superblock": true, 00:28:38.728 "num_base_bdevs": 2, 00:28:38.728 "num_base_bdevs_discovered": 2, 00:28:38.728 "num_base_bdevs_operational": 2, 00:28:38.728 "base_bdevs_list": [ 00:28:38.728 { 00:28:38.728 "name": "spare", 00:28:38.728 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:38.728 "is_configured": true, 00:28:38.728 "data_offset": 256, 00:28:38.728 "data_size": 7936 00:28:38.728 }, 00:28:38.728 { 00:28:38.728 "name": "BaseBdev2", 00:28:38.728 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:38.728 "is_configured": true, 00:28:38.728 "data_offset": 256, 00:28:38.728 "data_size": 7936 00:28:38.728 } 00:28:38.728 ] 00:28:38.728 }' 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.728 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.986 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.986 "name": "raid_bdev1", 00:28:38.986 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 
00:28:38.986 "strip_size_kb": 0, 00:28:38.986 "state": "online", 00:28:38.986 "raid_level": "raid1", 00:28:38.986 "superblock": true, 00:28:38.986 "num_base_bdevs": 2, 00:28:38.986 "num_base_bdevs_discovered": 2, 00:28:38.986 "num_base_bdevs_operational": 2, 00:28:38.986 "base_bdevs_list": [ 00:28:38.986 { 00:28:38.986 "name": "spare", 00:28:38.986 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:38.986 "is_configured": true, 00:28:38.986 "data_offset": 256, 00:28:38.986 "data_size": 7936 00:28:38.986 }, 00:28:38.986 { 00:28:38.986 "name": "BaseBdev2", 00:28:38.986 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:38.986 "is_configured": true, 00:28:38.986 "data_offset": 256, 00:28:38.986 "data_size": 7936 00:28:38.986 } 00:28:38.986 ] 00:28:38.986 }' 00:28:38.986 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.986 10:38:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:39.552 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:39.552 [2024-07-26 10:38:52.419993] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:39.552 [2024-07-26 10:38:52.420019] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:39.552 [2024-07-26 10:38:52.420071] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:39.552 [2024-07-26 10:38:52.420121] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:39.552 [2024-07-26 10:38:52.420132] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2727250 name raid_bdev1, state offline 00:28:39.552 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:28:39.552 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:39.810 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:40.068 /dev/nbd0 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.068 1+0 records in 00:28:40.068 1+0 records out 00:28:40.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152315 s, 26.9 MB/s 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:40.068 10:38:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:40.327 /dev/nbd1 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:28:40.327 
10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.327 1+0 records in 00:28:40.327 1+0 records out 00:28:40.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297078 s, 13.8 MB/s 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:40.327 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:40.584 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:40.584 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:40.584 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:40.585 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 
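[editor's note] The trace above exports BaseBdev1 and the rebuilt spare over NBD and compares their contents. The following is a hedged sketch of that check, reconstructed only from the commands visible in the trace (nbd_start_disk / nbd_stop_disk, the /proc/partitions wait, and cmp -i 1048576, which presumably skips the superblock/metadata region); device names and error handling are illustrative, not the authoritative bdev_raid.sh source.

```bash
#!/usr/bin/env bash
# Sketch of the rebuilt-data check traced above (assumptions noted in the lead-in).
set -euo pipefail

rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Attach both bdevs to NBD devices so ordinary block tools can read them.
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare     /dev/nbd1

# Wait until the kernel has registered each NBD device.
for nbd in nbd0 nbd1; do
    until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
done

# Skip the first 1 MiB on both devices (superblock/metadata area, per the traced
# `cmp -i 1048576`) and compare the remaining data byte for byte.
cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "rebuilt data matches"

# Detach the NBD devices again.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
```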
00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:40.841 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:41.097 10:38:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:41.355 [2024-07-26 10:38:54.146173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:41.355 [2024-07-26 10:38:54.146215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.355 [2024-07-26 10:38:54.146233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2724b10 00:28:41.355 [2024-07-26 10:38:54.146245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.355 [2024-07-26 10:38:54.147800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.355 [2024-07-26 10:38:54.147827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:41.355 [2024-07-26 10:38:54.147897] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:41.355 [2024-07-26 10:38:54.147920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:41.355 [2024-07-26 10:38:54.148009] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:41.355 spare 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.355 10:38:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.355 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.355 [2024-07-26 10:38:54.248316] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2727d60 00:28:41.355 [2024-07-26 10:38:54.248331] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:41.355 [2024-07-26 10:38:54.248492] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2723900 00:28:41.355 [2024-07-26 10:38:54.248621] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2727d60 00:28:41.355 [2024-07-26 10:38:54.248630] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2727d60 00:28:41.355 [2024-07-26 10:38:54.248722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.613 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.613 "name": "raid_bdev1", 00:28:41.613 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:41.613 "strip_size_kb": 0, 00:28:41.613 "state": "online", 00:28:41.613 "raid_level": "raid1", 00:28:41.613 "superblock": true, 00:28:41.613 "num_base_bdevs": 2, 00:28:41.613 "num_base_bdevs_discovered": 2, 00:28:41.613 "num_base_bdevs_operational": 2, 00:28:41.613 "base_bdevs_list": [ 00:28:41.613 { 00:28:41.613 "name": "spare", 00:28:41.613 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:41.613 "is_configured": true, 00:28:41.613 "data_offset": 256, 00:28:41.613 "data_size": 7936 00:28:41.613 }, 00:28:41.613 { 00:28:41.613 "name": "BaseBdev2", 00:28:41.613 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:41.613 "is_configured": true, 00:28:41.613 "data_offset": 256, 00:28:41.613 "data_size": 7936 00:28:41.613 } 00:28:41.613 ] 00:28:41.613 }' 00:28:41.613 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.613 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.178 10:38:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.437 "name": "raid_bdev1", 00:28:42.437 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:42.437 "strip_size_kb": 0, 00:28:42.437 "state": "online", 00:28:42.437 "raid_level": "raid1", 00:28:42.437 
"superblock": true, 00:28:42.437 "num_base_bdevs": 2, 00:28:42.437 "num_base_bdevs_discovered": 2, 00:28:42.437 "num_base_bdevs_operational": 2, 00:28:42.437 "base_bdevs_list": [ 00:28:42.437 { 00:28:42.437 "name": "spare", 00:28:42.437 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:42.437 "is_configured": true, 00:28:42.437 "data_offset": 256, 00:28:42.437 "data_size": 7936 00:28:42.437 }, 00:28:42.437 { 00:28:42.437 "name": "BaseBdev2", 00:28:42.437 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:42.437 "is_configured": true, 00:28:42.437 "data_offset": 256, 00:28:42.437 "data_size": 7936 00:28:42.437 } 00:28:42.437 ] 00:28:42.437 }' 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.437 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:42.697 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.697 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:42.956 [2024-07-26 10:38:55.614242] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.956 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.214 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.214 "name": "raid_bdev1", 00:28:43.214 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:43.214 "strip_size_kb": 0, 
00:28:43.214 "state": "online", 00:28:43.214 "raid_level": "raid1", 00:28:43.214 "superblock": true, 00:28:43.214 "num_base_bdevs": 2, 00:28:43.214 "num_base_bdevs_discovered": 1, 00:28:43.214 "num_base_bdevs_operational": 1, 00:28:43.214 "base_bdevs_list": [ 00:28:43.214 { 00:28:43.214 "name": null, 00:28:43.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.214 "is_configured": false, 00:28:43.214 "data_offset": 256, 00:28:43.214 "data_size": 7936 00:28:43.214 }, 00:28:43.214 { 00:28:43.214 "name": "BaseBdev2", 00:28:43.214 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:43.214 "is_configured": true, 00:28:43.214 "data_offset": 256, 00:28:43.214 "data_size": 7936 00:28:43.214 } 00:28:43.214 ] 00:28:43.214 }' 00:28:43.214 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.214 10:38:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:43.782 10:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:43.782 [2024-07-26 10:38:56.648986] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:43.782 [2024-07-26 10:38:56.649117] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:43.782 [2024-07-26 10:38:56.649132] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:43.782 [2024-07-26 10:38:56.649162] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:43.782 [2024-07-26 10:38:56.653649] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2723900 00:28:43.782 [2024-07-26 10:38:56.655620] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:43.782 10:38:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.158 "name": "raid_bdev1", 00:28:45.158 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:45.158 "strip_size_kb": 0, 00:28:45.158 "state": "online", 00:28:45.158 "raid_level": "raid1", 00:28:45.158 "superblock": true, 00:28:45.158 "num_base_bdevs": 2, 00:28:45.158 "num_base_bdevs_discovered": 2, 00:28:45.158 "num_base_bdevs_operational": 2, 00:28:45.158 "process": { 00:28:45.158 "type": "rebuild", 00:28:45.158 "target": "spare", 00:28:45.158 "progress": { 
00:28:45.158 "blocks": 3072, 00:28:45.158 "percent": 38 00:28:45.158 } 00:28:45.158 }, 00:28:45.158 "base_bdevs_list": [ 00:28:45.158 { 00:28:45.158 "name": "spare", 00:28:45.158 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:45.158 "is_configured": true, 00:28:45.158 "data_offset": 256, 00:28:45.158 "data_size": 7936 00:28:45.158 }, 00:28:45.158 { 00:28:45.158 "name": "BaseBdev2", 00:28:45.158 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:45.158 "is_configured": true, 00:28:45.158 "data_offset": 256, 00:28:45.158 "data_size": 7936 00:28:45.158 } 00:28:45.158 ] 00:28:45.158 }' 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.158 10:38:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:45.418 [2024-07-26 10:38:58.206285] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:45.418 [2024-07-26 10:38:58.267240] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:45.418 [2024-07-26 10:38:58.267281] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.418 [2024-07-26 10:38:58.267295] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:45.418 [2024-07-26 10:38:58.267303] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.418 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.677 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.677 "name": "raid_bdev1", 00:28:45.677 "uuid": 
"78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:45.677 "strip_size_kb": 0, 00:28:45.677 "state": "online", 00:28:45.677 "raid_level": "raid1", 00:28:45.677 "superblock": true, 00:28:45.677 "num_base_bdevs": 2, 00:28:45.677 "num_base_bdevs_discovered": 1, 00:28:45.677 "num_base_bdevs_operational": 1, 00:28:45.677 "base_bdevs_list": [ 00:28:45.677 { 00:28:45.677 "name": null, 00:28:45.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.677 "is_configured": false, 00:28:45.677 "data_offset": 256, 00:28:45.677 "data_size": 7936 00:28:45.677 }, 00:28:45.677 { 00:28:45.677 "name": "BaseBdev2", 00:28:45.677 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:45.677 "is_configured": true, 00:28:45.677 "data_offset": 256, 00:28:45.677 "data_size": 7936 00:28:45.677 } 00:28:45.677 ] 00:28:45.677 }' 00:28:45.677 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.677 10:38:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:46.245 10:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:46.504 [2024-07-26 10:38:59.310069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:46.504 [2024-07-26 10:38:59.310115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:46.504 [2024-07-26 10:38:59.310135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2778fd0 00:28:46.504 [2024-07-26 10:38:59.310152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:46.504 [2024-07-26 10:38:59.310484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:46.504 [2024-07-26 10:38:59.310500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:46.504 [2024-07-26 10:38:59.310571] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:46.504 [2024-07-26 10:38:59.310582] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:46.504 [2024-07-26 10:38:59.310598] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:46.504 [2024-07-26 10:38:59.310615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:46.504 [2024-07-26 10:38:59.315152] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2723900 00:28:46.504 spare 00:28:46.504 [2024-07-26 10:38:59.316521] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:46.504 10:38:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.442 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.700 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:47.700 "name": "raid_bdev1", 00:28:47.700 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:47.700 "strip_size_kb": 0, 00:28:47.700 "state": "online", 00:28:47.700 "raid_level": "raid1", 00:28:47.700 "superblock": true, 00:28:47.700 "num_base_bdevs": 2, 00:28:47.700 "num_base_bdevs_discovered": 2, 00:28:47.700 "num_base_bdevs_operational": 2, 00:28:47.700 "process": { 00:28:47.700 "type": "rebuild", 00:28:47.700 "target": "spare", 00:28:47.700 "progress": { 00:28:47.700 "blocks": 3072, 00:28:47.700 "percent": 38 00:28:47.700 } 00:28:47.700 }, 00:28:47.700 "base_bdevs_list": [ 00:28:47.700 { 00:28:47.700 "name": "spare", 00:28:47.700 "uuid": "1bad2960-8289-570c-912e-076263b125f7", 00:28:47.700 "is_configured": true, 00:28:47.700 "data_offset": 256, 00:28:47.700 "data_size": 7936 00:28:47.700 }, 00:28:47.700 { 00:28:47.700 "name": "BaseBdev2", 00:28:47.700 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:47.700 "is_configured": true, 00:28:47.700 "data_offset": 256, 00:28:47.700 "data_size": 7936 00:28:47.700 } 00:28:47.700 ] 00:28:47.700 }' 00:28:47.700 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:47.959 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:47.959 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:47.959 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:47.959 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:47.959 [2024-07-26 10:39:00.859976] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:48.218 [2024-07-26 10:39:00.928120] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:48.218 [2024-07-26 10:39:00.928165] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:48.219 [2024-07-26 10:39:00.928179] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:48.219 [2024-07-26 10:39:00.928187] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.219 10:39:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.478 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.478 "name": "raid_bdev1", 00:28:48.478 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:48.478 "strip_size_kb": 0, 00:28:48.478 "state": "online", 00:28:48.478 "raid_level": "raid1", 00:28:48.478 "superblock": true, 00:28:48.478 "num_base_bdevs": 2, 00:28:48.478 "num_base_bdevs_discovered": 1, 00:28:48.478 "num_base_bdevs_operational": 1, 00:28:48.478 "base_bdevs_list": [ 00:28:48.478 { 00:28:48.478 "name": null, 00:28:48.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.478 "is_configured": false, 00:28:48.478 "data_offset": 256, 00:28:48.478 "data_size": 7936 00:28:48.478 }, 00:28:48.478 { 00:28:48.478 "name": "BaseBdev2", 00:28:48.478 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:48.478 "is_configured": true, 00:28:48.478 "data_offset": 256, 00:28:48.478 "data_size": 7936 00:28:48.478 } 00:28:48.478 ] 00:28:48.478 }' 00:28:48.478 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.478 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:49.046 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:49.046 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.046 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:49.046 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:49.046 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.046 10:39:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.046 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.305 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.305 "name": "raid_bdev1", 00:28:49.305 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:49.305 "strip_size_kb": 0, 00:28:49.305 "state": "online", 00:28:49.305 "raid_level": "raid1", 00:28:49.305 "superblock": true, 00:28:49.305 "num_base_bdevs": 2, 00:28:49.305 "num_base_bdevs_discovered": 1, 00:28:49.306 "num_base_bdevs_operational": 1, 00:28:49.306 "base_bdevs_list": [ 00:28:49.306 { 00:28:49.306 "name": null, 00:28:49.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.306 "is_configured": false, 00:28:49.306 "data_offset": 256, 00:28:49.306 "data_size": 7936 00:28:49.306 }, 00:28:49.306 { 00:28:49.306 "name": "BaseBdev2", 00:28:49.306 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:49.306 "is_configured": true, 00:28:49.306 "data_offset": 256, 00:28:49.306 "data_size": 7936 00:28:49.306 } 00:28:49.306 ] 00:28:49.306 }' 00:28:49.306 10:39:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.306 10:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:49.306 10:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.306 10:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:49.306 10:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:49.565 10:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:49.824 [2024-07-26 10:39:02.508567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:49.824 [2024-07-26 10:39:02.508608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:49.824 [2024-07-26 10:39:02.508634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25cb9c0 00:28:49.824 [2024-07-26 10:39:02.508646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:49.824 [2024-07-26 10:39:02.508947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:49.824 [2024-07-26 10:39:02.508962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:49.824 [2024-07-26 10:39:02.509020] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:49.824 [2024-07-26 10:39:02.509031] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:49.824 [2024-07-26 10:39:02.509040] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:49.824 BaseBdev1 00:28:49.824 10:39:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.761 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.054 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.054 "name": "raid_bdev1", 00:28:51.054 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:51.054 "strip_size_kb": 0, 00:28:51.054 "state": "online", 00:28:51.054 "raid_level": "raid1", 00:28:51.054 "superblock": true, 00:28:51.054 "num_base_bdevs": 2, 00:28:51.054 "num_base_bdevs_discovered": 1, 00:28:51.054 "num_base_bdevs_operational": 1, 00:28:51.054 "base_bdevs_list": [ 00:28:51.054 { 00:28:51.054 "name": null, 00:28:51.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.054 "is_configured": false, 00:28:51.054 "data_offset": 256, 00:28:51.054 "data_size": 7936 00:28:51.054 }, 00:28:51.054 { 00:28:51.054 "name": "BaseBdev2", 00:28:51.054 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:51.054 "is_configured": true, 00:28:51.054 "data_offset": 256, 00:28:51.054 "data_size": 7936 00:28:51.054 } 00:28:51.054 ] 00:28:51.054 }' 00:28:51.054 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.054 10:39:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.652 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:28:51.911 "name": "raid_bdev1", 00:28:51.911 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:51.911 "strip_size_kb": 0, 00:28:51.911 "state": "online", 00:28:51.911 "raid_level": "raid1", 00:28:51.911 "superblock": true, 00:28:51.911 "num_base_bdevs": 2, 00:28:51.911 "num_base_bdevs_discovered": 1, 00:28:51.911 "num_base_bdevs_operational": 1, 00:28:51.911 "base_bdevs_list": [ 00:28:51.911 { 00:28:51.911 "name": null, 00:28:51.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.911 "is_configured": false, 00:28:51.911 "data_offset": 256, 00:28:51.911 "data_size": 7936 00:28:51.911 }, 00:28:51.911 { 00:28:51.911 "name": "BaseBdev2", 00:28:51.911 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:51.911 "is_configured": true, 00:28:51.911 "data_offset": 256, 00:28:51.911 "data_size": 7936 00:28:51.911 } 00:28:51.911 ] 00:28:51.911 }' 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:51.911 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:52.170 [2024-07-26 10:39:04.874848] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:52.170 [2024-07-26 10:39:04.874961] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:52.170 [2024-07-26 10:39:04.874975] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:52.170 request: 00:28:52.170 { 00:28:52.170 "base_bdev": "BaseBdev1", 00:28:52.170 "raid_bdev": "raid_bdev1", 00:28:52.170 "method": "bdev_raid_add_base_bdev", 00:28:52.170 "req_id": 1 00:28:52.170 } 00:28:52.170 Got JSON-RPC error response 00:28:52.170 response: 00:28:52.170 { 00:28:52.170 "code": -22, 00:28:52.170 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:52.170 } 00:28:52.170 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:28:52.170 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:52.170 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:52.170 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:52.170 10:39:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.108 10:39:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.367 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:53.367 "name": "raid_bdev1", 00:28:53.367 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:53.367 "strip_size_kb": 0, 00:28:53.367 "state": "online", 00:28:53.367 "raid_level": "raid1", 00:28:53.367 "superblock": true, 00:28:53.367 "num_base_bdevs": 2, 00:28:53.367 "num_base_bdevs_discovered": 1, 00:28:53.367 "num_base_bdevs_operational": 1, 00:28:53.367 "base_bdevs_list": [ 00:28:53.367 { 00:28:53.367 "name": null, 00:28:53.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.367 "is_configured": false, 00:28:53.367 "data_offset": 256, 00:28:53.367 "data_size": 7936 00:28:53.367 }, 00:28:53.367 { 00:28:53.367 "name": "BaseBdev2", 00:28:53.367 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:53.367 "is_configured": true, 00:28:53.367 "data_offset": 256, 00:28:53.367 "data_size": 7936 
00:28:53.367 } 00:28:53.367 ] 00:28:53.367 }' 00:28:53.367 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:53.367 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.936 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.195 "name": "raid_bdev1", 00:28:54.195 "uuid": "78ebf0d7-22ee-44e3-929b-3d0a3dbb9f3c", 00:28:54.195 "strip_size_kb": 0, 00:28:54.195 "state": "online", 00:28:54.195 "raid_level": "raid1", 00:28:54.195 "superblock": true, 00:28:54.195 "num_base_bdevs": 2, 00:28:54.195 "num_base_bdevs_discovered": 1, 00:28:54.195 "num_base_bdevs_operational": 1, 00:28:54.195 "base_bdevs_list": [ 00:28:54.195 { 00:28:54.195 "name": null, 00:28:54.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.195 "is_configured": false, 00:28:54.195 "data_offset": 256, 00:28:54.195 "data_size": 7936 00:28:54.195 }, 00:28:54.195 { 00:28:54.195 "name": "BaseBdev2", 00:28:54.195 "uuid": "32706619-6805-5d2c-9f49-97f2c5756077", 00:28:54.195 "is_configured": true, 00:28:54.195 "data_offset": 256, 00:28:54.195 "data_size": 7936 00:28:54.195 } 00:28:54.195 ] 00:28:54.195 }' 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 3515887 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 3515887 ']' 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 3515887 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:54.195 10:39:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3515887 00:28:54.195 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:54.195 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:54.196 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3515887' 00:28:54.196 killing process with pid 3515887 00:28:54.196 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 3515887 00:28:54.196 Received shutdown signal, test time was about 60.000000 seconds 00:28:54.196 00:28:54.196 Latency(us) 00:28:54.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.196 =================================================================================================================== 00:28:54.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:54.196 [2024-07-26 10:39:07.035216] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:54.196 [2024-07-26 10:39:07.035296] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:54.196 [2024-07-26 10:39:07.035340] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:54.196 [2024-07-26 10:39:07.035352] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2727d60 name raid_bdev1, state offline 00:28:54.196 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 3515887 00:28:54.196 [2024-07-26 10:39:07.059516] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:54.456 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:28:54.456 00:28:54.456 real 0m30.495s 00:28:54.456 user 0m47.405s 00:28:54.456 sys 0m4.905s 00:28:54.456 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.456 10:39:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:28:54.456 ************************************ 00:28:54.456 END TEST raid_rebuild_test_sb_4k 00:28:54.456 ************************************ 00:28:54.456 10:39:07 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32' 00:28:54.456 10:39:07 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:28:54.456 10:39:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:54.456 10:39:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.456 10:39:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:54.456 ************************************ 00:28:54.456 START TEST raid_state_function_test_sb_md_separate 00:28:54.456 ************************************ 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev1 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=3521910 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3521910' 00:28:54.456 Process raid pid: 3521910 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 3521910 /var/tmp/spdk-raid.sock 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 3521910 ']' 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:54.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.456 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:54.717 [2024-07-26 10:39:07.361392] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:28:54.717 [2024-07-26 10:39:07.361435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 
EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.717 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.717 [2024-07-26 10:39:07.482027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.717 [2024-07-26 10:39:07.525572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.717 [2024-07-26 10:39:07.577219] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:54.717 [2024-07-26 10:39:07.577243] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:54.717 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.717 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:28:54.717 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:54.976 [2024-07-26 10:39:07.826954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:54.976 [2024-07-26 10:39:07.826996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:54.976 [2024-07-26 10:39:07.827005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:54.976 [2024-07-26 10:39:07.827016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:54.976 10:39:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.976 10:39:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:55.235 10:39:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.236 "name": "Existed_Raid", 00:28:55.236 "uuid": "380fed42-4926-42c2-bc2a-7b0da8f9d28e", 00:28:55.236 "strip_size_kb": 0, 00:28:55.236 "state": "configuring", 00:28:55.236 "raid_level": "raid1", 00:28:55.236 "superblock": true, 00:28:55.236 "num_base_bdevs": 2, 00:28:55.236 "num_base_bdevs_discovered": 0, 00:28:55.236 "num_base_bdevs_operational": 2, 00:28:55.236 "base_bdevs_list": [ 00:28:55.236 { 00:28:55.236 "name": "BaseBdev1", 00:28:55.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.236 "is_configured": false, 00:28:55.236 "data_offset": 0, 00:28:55.236 "data_size": 0 00:28:55.236 }, 00:28:55.236 { 00:28:55.236 "name": "BaseBdev2", 00:28:55.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.236 "is_configured": false, 00:28:55.236 "data_offset": 0, 00:28:55.236 "data_size": 0 00:28:55.236 } 00:28:55.236 ] 00:28:55.236 }' 00:28:55.236 10:39:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.236 10:39:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:55.804 10:39:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:56.063 [2024-07-26 10:39:08.885613] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:56.063 [2024-07-26 10:39:08.885646] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x138cce0 name Existed_Raid, state configuring 00:28:56.063 10:39:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:56.322 [2024-07-26 10:39:09.114225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:56.322 [2024-07-26 10:39:09.114248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:56.322 [2024-07-26 10:39:09.114257] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:28:56.322 [2024-07-26 10:39:09.114267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:56.322 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:28:56.581 [2024-07-26 10:39:09.352897] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:56.581 BaseBdev1 00:28:56.581 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:56.581 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:56.581 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:56.581 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:28:56.581 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:56.582 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:56.582 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:56.841 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:57.100 [ 00:28:57.100 { 00:28:57.100 "name": "BaseBdev1", 00:28:57.100 "aliases": [ 00:28:57.100 "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e" 00:28:57.100 ], 00:28:57.100 "product_name": "Malloc disk", 00:28:57.100 "block_size": 4096, 00:28:57.100 "num_blocks": 8192, 00:28:57.100 "uuid": "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e", 00:28:57.100 "md_size": 32, 00:28:57.100 "md_interleave": false, 00:28:57.100 "dif_type": 0, 00:28:57.100 "assigned_rate_limits": { 00:28:57.100 "rw_ios_per_sec": 0, 00:28:57.100 "rw_mbytes_per_sec": 0, 00:28:57.100 "r_mbytes_per_sec": 0, 00:28:57.100 "w_mbytes_per_sec": 0 00:28:57.100 }, 00:28:57.100 "claimed": true, 00:28:57.100 "claim_type": "exclusive_write", 00:28:57.100 "zoned": false, 00:28:57.100 "supported_io_types": { 00:28:57.100 "read": true, 00:28:57.100 "write": true, 00:28:57.100 "unmap": true, 00:28:57.100 "flush": true, 00:28:57.100 "reset": true, 00:28:57.100 "nvme_admin": false, 00:28:57.100 "nvme_io": false, 00:28:57.100 "nvme_io_md": false, 00:28:57.100 "write_zeroes": true, 00:28:57.100 "zcopy": true, 00:28:57.100 "get_zone_info": false, 00:28:57.100 "zone_management": false, 00:28:57.100 "zone_append": false, 00:28:57.100 "compare": false, 00:28:57.100 "compare_and_write": false, 00:28:57.100 "abort": true, 00:28:57.100 "seek_hole": false, 00:28:57.100 "seek_data": false, 00:28:57.100 "copy": true, 00:28:57.100 "nvme_iov_md": false 00:28:57.100 }, 00:28:57.100 "memory_domains": [ 00:28:57.100 { 00:28:57.100 "dma_device_id": "system", 00:28:57.100 "dma_device_type": 1 00:28:57.100 }, 00:28:57.100 { 00:28:57.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.100 "dma_device_type": 2 00:28:57.100 } 00:28:57.100 ], 00:28:57.100 "driver_specific": {} 00:28:57.100 } 00:28:57.100 ] 00:28:57.100 10:39:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.100 10:39:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:57.359 10:39:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.360 "name": "Existed_Raid", 00:28:57.360 "uuid": "a327c8ca-5b5f-48f6-b0f6-ad28ba33fdf2", 00:28:57.360 "strip_size_kb": 0, 00:28:57.360 "state": "configuring", 00:28:57.360 "raid_level": "raid1", 00:28:57.360 "superblock": true, 00:28:57.360 "num_base_bdevs": 2, 00:28:57.360 "num_base_bdevs_discovered": 1, 00:28:57.360 "num_base_bdevs_operational": 2, 00:28:57.360 "base_bdevs_list": [ 00:28:57.360 { 00:28:57.360 "name": "BaseBdev1", 00:28:57.360 "uuid": "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e", 00:28:57.360 "is_configured": true, 00:28:57.360 "data_offset": 256, 00:28:57.360 "data_size": 7936 00:28:57.360 }, 00:28:57.360 { 00:28:57.360 "name": "BaseBdev2", 00:28:57.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.360 "is_configured": false, 00:28:57.360 "data_offset": 0, 00:28:57.360 "data_size": 0 00:28:57.360 } 00:28:57.360 ] 00:28:57.360 }' 00:28:57.360 10:39:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.360 10:39:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:57.928 10:39:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:57.928 [2024-07-26 10:39:10.824807] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:57.928 [2024-07-26 10:39:10.824843] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x138c610 name Existed_Raid, state configuring 00:28:58.187 10:39:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:28:58.187 [2024-07-26 10:39:11.037398] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:58.187 [2024-07-26 10:39:11.038696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:58.187 [2024-07-26 10:39:11.038728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.187 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:58.446 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.446 "name": "Existed_Raid", 00:28:58.446 "uuid": "621b8199-abd5-4f5f-a8d9-293e4b795650", 00:28:58.447 "strip_size_kb": 0, 00:28:58.447 "state": "configuring", 00:28:58.447 "raid_level": "raid1", 00:28:58.447 "superblock": true, 00:28:58.447 "num_base_bdevs": 2, 00:28:58.447 "num_base_bdevs_discovered": 1, 00:28:58.447 "num_base_bdevs_operational": 2, 00:28:58.447 "base_bdevs_list": [ 00:28:58.447 { 00:28:58.447 "name": "BaseBdev1", 00:28:58.447 "uuid": "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e", 00:28:58.447 "is_configured": true, 00:28:58.447 "data_offset": 256, 00:28:58.447 "data_size": 7936 00:28:58.447 }, 00:28:58.447 { 00:28:58.447 "name": "BaseBdev2", 00:28:58.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.447 "is_configured": false, 00:28:58.447 "data_offset": 0, 00:28:58.447 "data_size": 0 00:28:58.447 } 00:28:58.447 ] 00:28:58.447 }' 00:28:58.447 10:39:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.447 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:59.015 10:39:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:28:59.275 [2024-07-26 10:39:12.075826] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:59.275 [2024-07-26 10:39:12.075953] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15287d0 00:28:59.275 [2024-07-26 10:39:12.075965] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:59.275 [2024-07-26 10:39:12.076021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1529c10 00:28:59.275 [2024-07-26 10:39:12.076102] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15287d0 00:28:59.275 [2024-07-26 10:39:12.076111] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15287d0 00:28:59.275 [2024-07-26 10:39:12.076179] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.275 BaseBdev2 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:59.275 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:59.534 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:59.794 [ 00:28:59.794 { 00:28:59.794 "name": "BaseBdev2", 00:28:59.794 "aliases": [ 00:28:59.794 "f6287721-522c-46d8-a23a-70f4810f4b2e" 00:28:59.794 ], 00:28:59.794 "product_name": "Malloc disk", 00:28:59.794 "block_size": 4096, 00:28:59.794 "num_blocks": 8192, 00:28:59.794 "uuid": "f6287721-522c-46d8-a23a-70f4810f4b2e", 00:28:59.794 "md_size": 32, 00:28:59.794 "md_interleave": false, 00:28:59.794 "dif_type": 0, 00:28:59.794 "assigned_rate_limits": { 00:28:59.794 "rw_ios_per_sec": 0, 00:28:59.794 "rw_mbytes_per_sec": 0, 00:28:59.794 "r_mbytes_per_sec": 0, 00:28:59.794 "w_mbytes_per_sec": 0 00:28:59.794 }, 00:28:59.794 "claimed": true, 00:28:59.794 "claim_type": "exclusive_write", 00:28:59.794 "zoned": false, 00:28:59.794 "supported_io_types": { 00:28:59.794 "read": true, 00:28:59.794 "write": true, 00:28:59.794 "unmap": true, 00:28:59.794 "flush": true, 00:28:59.794 "reset": true, 00:28:59.794 "nvme_admin": false, 00:28:59.794 "nvme_io": false, 00:28:59.794 
"nvme_io_md": false, 00:28:59.794 "write_zeroes": true, 00:28:59.794 "zcopy": true, 00:28:59.794 "get_zone_info": false, 00:28:59.794 "zone_management": false, 00:28:59.794 "zone_append": false, 00:28:59.794 "compare": false, 00:28:59.794 "compare_and_write": false, 00:28:59.794 "abort": true, 00:28:59.794 "seek_hole": false, 00:28:59.794 "seek_data": false, 00:28:59.794 "copy": true, 00:28:59.794 "nvme_iov_md": false 00:28:59.794 }, 00:28:59.794 "memory_domains": [ 00:28:59.794 { 00:28:59.794 "dma_device_id": "system", 00:28:59.794 "dma_device_type": 1 00:28:59.794 }, 00:28:59.794 { 00:28:59.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.794 "dma_device_type": 2 00:28:59.794 } 00:28:59.794 ], 00:28:59.794 "driver_specific": {} 00:28:59.794 } 00:28:59.794 ] 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.794 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:00.054 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.054 "name": "Existed_Raid", 00:29:00.054 "uuid": "621b8199-abd5-4f5f-a8d9-293e4b795650", 00:29:00.054 "strip_size_kb": 0, 00:29:00.054 "state": "online", 00:29:00.054 "raid_level": "raid1", 00:29:00.054 "superblock": true, 00:29:00.054 "num_base_bdevs": 2, 00:29:00.054 "num_base_bdevs_discovered": 2, 00:29:00.054 "num_base_bdevs_operational": 2, 00:29:00.054 "base_bdevs_list": [ 00:29:00.054 { 00:29:00.054 "name": "BaseBdev1", 00:29:00.054 "uuid": "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e", 00:29:00.054 "is_configured": true, 00:29:00.054 "data_offset": 256, 00:29:00.054 "data_size": 7936 00:29:00.054 }, 00:29:00.054 { 
00:29:00.054 "name": "BaseBdev2", 00:29:00.054 "uuid": "f6287721-522c-46d8-a23a-70f4810f4b2e", 00:29:00.054 "is_configured": true, 00:29:00.054 "data_offset": 256, 00:29:00.054 "data_size": 7936 00:29:00.054 } 00:29:00.054 ] 00:29:00.054 }' 00:29:00.054 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.054 10:39:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:00.620 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:00.879 [2024-07-26 10:39:13.556091] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:00.879 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:00.879 "name": "Existed_Raid", 00:29:00.879 "aliases": [ 00:29:00.880 "621b8199-abd5-4f5f-a8d9-293e4b795650" 00:29:00.880 ], 00:29:00.880 "product_name": "Raid Volume", 00:29:00.880 "block_size": 4096, 00:29:00.880 "num_blocks": 7936, 00:29:00.880 "uuid": "621b8199-abd5-4f5f-a8d9-293e4b795650", 00:29:00.880 "md_size": 32, 00:29:00.880 "md_interleave": false, 00:29:00.880 "dif_type": 0, 00:29:00.880 "assigned_rate_limits": { 00:29:00.880 "rw_ios_per_sec": 0, 00:29:00.880 "rw_mbytes_per_sec": 0, 00:29:00.880 "r_mbytes_per_sec": 0, 00:29:00.880 "w_mbytes_per_sec": 0 00:29:00.880 }, 00:29:00.880 "claimed": false, 00:29:00.880 "zoned": false, 00:29:00.880 "supported_io_types": { 00:29:00.880 "read": true, 00:29:00.880 "write": true, 00:29:00.880 "unmap": false, 00:29:00.880 "flush": false, 00:29:00.880 "reset": true, 00:29:00.880 "nvme_admin": false, 00:29:00.880 "nvme_io": false, 00:29:00.880 "nvme_io_md": false, 00:29:00.880 "write_zeroes": true, 00:29:00.880 "zcopy": false, 00:29:00.880 "get_zone_info": false, 00:29:00.880 "zone_management": false, 00:29:00.880 "zone_append": false, 00:29:00.880 "compare": false, 00:29:00.880 "compare_and_write": false, 00:29:00.880 "abort": false, 00:29:00.880 "seek_hole": false, 00:29:00.880 "seek_data": false, 00:29:00.880 "copy": false, 00:29:00.880 "nvme_iov_md": false 00:29:00.880 }, 00:29:00.880 "memory_domains": [ 00:29:00.880 { 00:29:00.880 "dma_device_id": "system", 00:29:00.880 "dma_device_type": 1 00:29:00.880 }, 00:29:00.880 { 00:29:00.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.880 "dma_device_type": 2 00:29:00.880 }, 00:29:00.880 { 00:29:00.880 "dma_device_id": "system", 00:29:00.880 "dma_device_type": 1 00:29:00.880 }, 00:29:00.880 { 00:29:00.880 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.880 "dma_device_type": 2 00:29:00.880 } 00:29:00.880 ], 00:29:00.880 "driver_specific": { 00:29:00.880 "raid": { 00:29:00.880 "uuid": "621b8199-abd5-4f5f-a8d9-293e4b795650", 00:29:00.880 "strip_size_kb": 0, 00:29:00.880 "state": "online", 00:29:00.880 "raid_level": "raid1", 00:29:00.880 "superblock": true, 00:29:00.880 "num_base_bdevs": 2, 00:29:00.880 "num_base_bdevs_discovered": 2, 00:29:00.880 "num_base_bdevs_operational": 2, 00:29:00.880 "base_bdevs_list": [ 00:29:00.880 { 00:29:00.880 "name": "BaseBdev1", 00:29:00.880 "uuid": "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e", 00:29:00.880 "is_configured": true, 00:29:00.880 "data_offset": 256, 00:29:00.880 "data_size": 7936 00:29:00.880 }, 00:29:00.880 { 00:29:00.880 "name": "BaseBdev2", 00:29:00.880 "uuid": "f6287721-522c-46d8-a23a-70f4810f4b2e", 00:29:00.880 "is_configured": true, 00:29:00.880 "data_offset": 256, 00:29:00.880 "data_size": 7936 00:29:00.880 } 00:29:00.880 ] 00:29:00.880 } 00:29:00.880 } 00:29:00.880 }' 00:29:00.880 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:00.880 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:29:00.880 BaseBdev2' 00:29:00.880 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:00.880 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:29:00.880 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:01.138 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:01.138 "name": "BaseBdev1", 00:29:01.138 "aliases": [ 00:29:01.138 "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e" 00:29:01.138 ], 00:29:01.138 "product_name": "Malloc disk", 00:29:01.138 "block_size": 4096, 00:29:01.138 "num_blocks": 8192, 00:29:01.138 "uuid": "3a88d80f-6be0-488c-b5a2-3f4a0f799e4e", 00:29:01.138 "md_size": 32, 00:29:01.138 "md_interleave": false, 00:29:01.138 "dif_type": 0, 00:29:01.138 "assigned_rate_limits": { 00:29:01.138 "rw_ios_per_sec": 0, 00:29:01.138 "rw_mbytes_per_sec": 0, 00:29:01.138 "r_mbytes_per_sec": 0, 00:29:01.138 "w_mbytes_per_sec": 0 00:29:01.138 }, 00:29:01.138 "claimed": true, 00:29:01.138 "claim_type": "exclusive_write", 00:29:01.138 "zoned": false, 00:29:01.138 "supported_io_types": { 00:29:01.138 "read": true, 00:29:01.138 "write": true, 00:29:01.138 "unmap": true, 00:29:01.138 "flush": true, 00:29:01.138 "reset": true, 00:29:01.138 "nvme_admin": false, 00:29:01.138 "nvme_io": false, 00:29:01.138 "nvme_io_md": false, 00:29:01.138 "write_zeroes": true, 00:29:01.138 "zcopy": true, 00:29:01.138 "get_zone_info": false, 00:29:01.138 "zone_management": false, 00:29:01.138 "zone_append": false, 00:29:01.138 "compare": false, 00:29:01.138 "compare_and_write": false, 00:29:01.138 "abort": true, 00:29:01.138 "seek_hole": false, 00:29:01.138 "seek_data": false, 00:29:01.138 "copy": true, 00:29:01.138 "nvme_iov_md": false 00:29:01.138 }, 00:29:01.138 "memory_domains": [ 00:29:01.138 { 00:29:01.138 "dma_device_id": "system", 00:29:01.138 "dma_device_type": 1 00:29:01.138 }, 00:29:01.138 { 00:29:01.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:29:01.138 "dma_device_type": 2 00:29:01.138 } 00:29:01.138 ], 00:29:01.138 "driver_specific": {} 00:29:01.138 }' 00:29:01.138 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:01.138 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:01.138 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:29:01.138 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:01.138 10:39:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:01.138 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:01.138 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:01.397 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:01.655 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:01.655 "name": "BaseBdev2", 00:29:01.655 "aliases": [ 00:29:01.655 "f6287721-522c-46d8-a23a-70f4810f4b2e" 00:29:01.655 ], 00:29:01.655 "product_name": "Malloc disk", 00:29:01.655 "block_size": 4096, 00:29:01.655 "num_blocks": 8192, 00:29:01.655 "uuid": "f6287721-522c-46d8-a23a-70f4810f4b2e", 00:29:01.655 "md_size": 32, 00:29:01.655 "md_interleave": false, 00:29:01.655 "dif_type": 0, 00:29:01.655 "assigned_rate_limits": { 00:29:01.655 "rw_ios_per_sec": 0, 00:29:01.655 "rw_mbytes_per_sec": 0, 00:29:01.655 "r_mbytes_per_sec": 0, 00:29:01.655 "w_mbytes_per_sec": 0 00:29:01.655 }, 00:29:01.655 "claimed": true, 00:29:01.655 "claim_type": "exclusive_write", 00:29:01.655 "zoned": false, 00:29:01.655 "supported_io_types": { 00:29:01.655 "read": true, 00:29:01.655 "write": true, 00:29:01.655 "unmap": true, 00:29:01.655 "flush": true, 00:29:01.655 "reset": true, 00:29:01.655 "nvme_admin": false, 00:29:01.655 "nvme_io": false, 00:29:01.655 "nvme_io_md": false, 00:29:01.655 "write_zeroes": true, 00:29:01.655 "zcopy": true, 00:29:01.655 "get_zone_info": false, 00:29:01.655 "zone_management": false, 00:29:01.655 "zone_append": false, 00:29:01.655 "compare": false, 00:29:01.655 "compare_and_write": false, 00:29:01.655 "abort": true, 00:29:01.655 "seek_hole": false, 00:29:01.655 "seek_data": false, 00:29:01.655 "copy": true, 00:29:01.655 "nvme_iov_md": false 00:29:01.655 }, 00:29:01.655 "memory_domains": [ 00:29:01.655 { 00:29:01.655 
"dma_device_id": "system", 00:29:01.655 "dma_device_type": 1 00:29:01.655 }, 00:29:01.655 { 00:29:01.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:01.655 "dma_device_type": 2 00:29:01.656 } 00:29:01.656 ], 00:29:01.656 "driver_specific": {} 00:29:01.656 }' 00:29:01.656 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:01.656 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:01.656 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:29:01.656 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:01.656 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:01.914 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:02.173 [2024-07-26 10:39:14.967646] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.173 10:39:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.173 10:39:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.173 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:02.432 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.432 "name": "Existed_Raid", 00:29:02.432 "uuid": "621b8199-abd5-4f5f-a8d9-293e4b795650", 00:29:02.432 "strip_size_kb": 0, 00:29:02.432 "state": "online", 00:29:02.432 "raid_level": "raid1", 00:29:02.432 "superblock": true, 00:29:02.432 "num_base_bdevs": 2, 00:29:02.432 "num_base_bdevs_discovered": 1, 00:29:02.432 "num_base_bdevs_operational": 1, 00:29:02.432 "base_bdevs_list": [ 00:29:02.432 { 00:29:02.432 "name": null, 00:29:02.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.432 "is_configured": false, 00:29:02.432 "data_offset": 256, 00:29:02.432 "data_size": 7936 00:29:02.432 }, 00:29:02.432 { 00:29:02.432 "name": "BaseBdev2", 00:29:02.432 "uuid": "f6287721-522c-46d8-a23a-70f4810f4b2e", 00:29:02.432 "is_configured": true, 00:29:02.432 "data_offset": 256, 00:29:02.432 "data_size": 7936 00:29:02.432 } 00:29:02.432 ] 00:29:02.432 }' 00:29:02.432 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.432 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:02.999 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:29:02.999 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:29:02.999 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:29:02.999 10:39:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.258 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:29:03.258 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:03.258 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:03.517 [2024-07-26 10:39:16.233013] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:03.517 [2024-07-26 10:39:16.233089] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:03.517 [2024-07-26 10:39:16.244082] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:03.517 [2024-07-26 10:39:16.244111] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:03.517 [2024-07-26 10:39:16.244121] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15287d0 name Existed_Raid, state offline 00:29:03.517 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:29:03.517 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:29:03.517 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.517 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 3521910 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 3521910 ']' 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 3521910 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3521910 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3521910' 00:29:03.777 killing process with pid 3521910 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 3521910 00:29:03.777 [2024-07-26 10:39:16.537240] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:03.777 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 3521910 00:29:03.777 [2024-07-26 10:39:16.538069] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:04.036 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:29:04.036 00:29:04.036 real 0m9.394s 00:29:04.036 user 0m17.013s 00:29:04.036 sys 0m1.879s 00:29:04.036 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.036 10:39:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:04.036 ************************************ 00:29:04.036 END TEST raid_state_function_test_sb_md_separate 00:29:04.036 ************************************ 00:29:04.036 10:39:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:29:04.036 10:39:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 
1 ']' 00:29:04.036 10:39:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.036 10:39:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:04.037 ************************************ 00:29:04.037 START TEST raid_superblock_test_md_separate 00:29:04.037 ************************************ 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=3523715 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 3523715 /var/tmp/spdk-raid.sock 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 3523715 ']' 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:04.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
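A minimal standalone sketch of the RPC sequence this raid_superblock_test_md_separate run drives over the socket above, included only to make the logged commands easier to follow. It assumes an SPDK checkout at $SPDK_DIR and a bdev_svc app already listening on /var/tmp/spdk-raid.sock; the bdev names and sizes mirror the ones visible in the log but are otherwise illustrative, and this is not the test script itself.

# illustrative sketch, not part of the captured log
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/crypto-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# two malloc bdevs with 4096-byte blocks and 32 bytes of separate (non-interleaved) metadata
$RPC bdev_malloc_create 32 4096 -m 32 -b malloc1
$RPC bdev_malloc_create 32 4096 -m 32 -b malloc2

# wrap each malloc bdev in a passthru bdev with a fixed UUID, as the test does for pt1/pt2
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# assemble a raid1 bdev over the passthru bdevs; -s enables the superblock
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# confirm the array is online and that md_size/md_interleave are preserved on a base bdev
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
$RPC bdev_get_bdevs -b pt1 | jq '.[0].md_size, .[0].md_interleave'

# tear down in reverse order
$RPC bdev_raid_delete raid_bdev1
$RPC bdev_passthru_delete pt1 && $RPC bdev_passthru_delete pt2
$RPC bdev_malloc_delete malloc1 && $RPC bdev_malloc_delete malloc2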
00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.037 10:39:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:04.037 [2024-07-26 10:39:16.860081] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:29:04.037 [2024-07-26 10:39:16.860135] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523715 ] 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:29:04.037 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:04.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:04.037 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:04.297 [2024-07-26 10:39:16.993639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.297 [2024-07-26 10:39:17.037650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.297 [2024-07-26 10:39:17.095379] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:04.297 [2024-07-26 10:39:17.095414] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:04.865 10:39:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:29:05.125 malloc1 00:29:05.125 10:39:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:05.386 [2024-07-26 10:39:18.206517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:05.386 [2024-07-26 10:39:18.206563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.386 [2024-07-26 10:39:18.206582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f91790 00:29:05.386 [2024-07-26 10:39:18.206593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.386 [2024-07-26 10:39:18.207936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.386 [2024-07-26 10:39:18.207961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:05.386 pt1 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:05.386 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:29:05.645 malloc2 00:29:05.645 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:05.904 [2024-07-26 10:39:18.672855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:05.904 [2024-07-26 10:39:18.672895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.904 [2024-07-26 10:39:18.672912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4c3c0 00:29:05.904 [2024-07-26 10:39:18.672925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.904 [2024-07-26 10:39:18.674318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.904 [2024-07-26 10:39:18.674345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:05.904 pt2 00:29:05.904 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:29:05.904 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:29:05.904 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:29:06.164 [2024-07-26 10:39:18.897473] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:06.164 [2024-07-26 10:39:18.898653] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:06.164 [2024-07-26 10:39:18.898777] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f47c40 00:29:06.164 [2024-07-26 10:39:18.898789] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:06.164 [2024-07-26 10:39:18.898862] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f4ab20 00:29:06.164 [2024-07-26 10:39:18.898963] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f47c40 00:29:06.164 [2024-07-26 10:39:18.898973] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f47c40 00:29:06.164 [2024-07-26 10:39:18.899050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.164 10:39:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.423 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.423 "name": "raid_bdev1", 00:29:06.424 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:06.424 "strip_size_kb": 0, 00:29:06.424 "state": "online", 00:29:06.424 "raid_level": "raid1", 00:29:06.424 "superblock": true, 00:29:06.424 "num_base_bdevs": 2, 00:29:06.424 "num_base_bdevs_discovered": 2, 00:29:06.424 "num_base_bdevs_operational": 2, 00:29:06.424 "base_bdevs_list": [ 00:29:06.424 { 00:29:06.424 "name": "pt1", 00:29:06.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:06.424 "is_configured": true, 00:29:06.424 "data_offset": 256, 00:29:06.424 "data_size": 7936 00:29:06.424 }, 00:29:06.424 { 00:29:06.424 "name": "pt2", 00:29:06.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:06.424 
"is_configured": true, 00:29:06.424 "data_offset": 256, 00:29:06.424 "data_size": 7936 00:29:06.424 } 00:29:06.424 ] 00:29:06.424 }' 00:29:06.424 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.424 10:39:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:06.992 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:07.251 [2024-07-26 10:39:19.920387] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:07.251 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:07.251 "name": "raid_bdev1", 00:29:07.251 "aliases": [ 00:29:07.251 "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5" 00:29:07.251 ], 00:29:07.251 "product_name": "Raid Volume", 00:29:07.251 "block_size": 4096, 00:29:07.251 "num_blocks": 7936, 00:29:07.251 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:07.251 "md_size": 32, 00:29:07.251 "md_interleave": false, 00:29:07.251 "dif_type": 0, 00:29:07.251 "assigned_rate_limits": { 00:29:07.251 "rw_ios_per_sec": 0, 00:29:07.251 "rw_mbytes_per_sec": 0, 00:29:07.251 "r_mbytes_per_sec": 0, 00:29:07.251 "w_mbytes_per_sec": 0 00:29:07.251 }, 00:29:07.251 "claimed": false, 00:29:07.251 "zoned": false, 00:29:07.251 "supported_io_types": { 00:29:07.251 "read": true, 00:29:07.251 "write": true, 00:29:07.251 "unmap": false, 00:29:07.251 "flush": false, 00:29:07.251 "reset": true, 00:29:07.251 "nvme_admin": false, 00:29:07.251 "nvme_io": false, 00:29:07.251 "nvme_io_md": false, 00:29:07.251 "write_zeroes": true, 00:29:07.251 "zcopy": false, 00:29:07.251 "get_zone_info": false, 00:29:07.251 "zone_management": false, 00:29:07.251 "zone_append": false, 00:29:07.251 "compare": false, 00:29:07.251 "compare_and_write": false, 00:29:07.251 "abort": false, 00:29:07.251 "seek_hole": false, 00:29:07.251 "seek_data": false, 00:29:07.251 "copy": false, 00:29:07.251 "nvme_iov_md": false 00:29:07.251 }, 00:29:07.251 "memory_domains": [ 00:29:07.251 { 00:29:07.251 "dma_device_id": "system", 00:29:07.251 "dma_device_type": 1 00:29:07.251 }, 00:29:07.251 { 00:29:07.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.251 "dma_device_type": 2 00:29:07.251 }, 00:29:07.251 { 00:29:07.251 "dma_device_id": "system", 00:29:07.251 "dma_device_type": 1 00:29:07.251 }, 00:29:07.251 { 00:29:07.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.251 "dma_device_type": 2 00:29:07.251 } 00:29:07.251 ], 00:29:07.251 "driver_specific": { 00:29:07.251 "raid": { 00:29:07.252 "uuid": 
"e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:07.252 "strip_size_kb": 0, 00:29:07.252 "state": "online", 00:29:07.252 "raid_level": "raid1", 00:29:07.252 "superblock": true, 00:29:07.252 "num_base_bdevs": 2, 00:29:07.252 "num_base_bdevs_discovered": 2, 00:29:07.252 "num_base_bdevs_operational": 2, 00:29:07.252 "base_bdevs_list": [ 00:29:07.252 { 00:29:07.252 "name": "pt1", 00:29:07.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:07.252 "is_configured": true, 00:29:07.252 "data_offset": 256, 00:29:07.252 "data_size": 7936 00:29:07.252 }, 00:29:07.252 { 00:29:07.252 "name": "pt2", 00:29:07.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:07.252 "is_configured": true, 00:29:07.252 "data_offset": 256, 00:29:07.252 "data_size": 7936 00:29:07.252 } 00:29:07.252 ] 00:29:07.252 } 00:29:07.252 } 00:29:07.252 }' 00:29:07.252 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:07.252 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:07.252 pt2' 00:29:07.252 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.252 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:07.252 10:39:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:07.510 "name": "pt1", 00:29:07.510 "aliases": [ 00:29:07.510 "00000000-0000-0000-0000-000000000001" 00:29:07.510 ], 00:29:07.510 "product_name": "passthru", 00:29:07.510 "block_size": 4096, 00:29:07.510 "num_blocks": 8192, 00:29:07.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:07.510 "md_size": 32, 00:29:07.510 "md_interleave": false, 00:29:07.510 "dif_type": 0, 00:29:07.510 "assigned_rate_limits": { 00:29:07.510 "rw_ios_per_sec": 0, 00:29:07.510 "rw_mbytes_per_sec": 0, 00:29:07.510 "r_mbytes_per_sec": 0, 00:29:07.510 "w_mbytes_per_sec": 0 00:29:07.510 }, 00:29:07.510 "claimed": true, 00:29:07.510 "claim_type": "exclusive_write", 00:29:07.510 "zoned": false, 00:29:07.510 "supported_io_types": { 00:29:07.510 "read": true, 00:29:07.510 "write": true, 00:29:07.510 "unmap": true, 00:29:07.510 "flush": true, 00:29:07.510 "reset": true, 00:29:07.510 "nvme_admin": false, 00:29:07.510 "nvme_io": false, 00:29:07.510 "nvme_io_md": false, 00:29:07.510 "write_zeroes": true, 00:29:07.510 "zcopy": true, 00:29:07.510 "get_zone_info": false, 00:29:07.510 "zone_management": false, 00:29:07.510 "zone_append": false, 00:29:07.510 "compare": false, 00:29:07.510 "compare_and_write": false, 00:29:07.510 "abort": true, 00:29:07.510 "seek_hole": false, 00:29:07.510 "seek_data": false, 00:29:07.510 "copy": true, 00:29:07.510 "nvme_iov_md": false 00:29:07.510 }, 00:29:07.510 "memory_domains": [ 00:29:07.510 { 00:29:07.510 "dma_device_id": "system", 00:29:07.510 "dma_device_type": 1 00:29:07.510 }, 00:29:07.510 { 00:29:07.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.510 "dma_device_type": 2 00:29:07.510 } 00:29:07.510 ], 00:29:07.510 "driver_specific": { 00:29:07.510 "passthru": { 00:29:07.510 "name": "pt1", 00:29:07.510 "base_bdev_name": "malloc1" 00:29:07.510 } 00:29:07.510 } 00:29:07.510 }' 00:29:07.510 10:39:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:07.510 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:07.769 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:08.029 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:08.029 "name": "pt2", 00:29:08.029 "aliases": [ 00:29:08.029 "00000000-0000-0000-0000-000000000002" 00:29:08.029 ], 00:29:08.029 "product_name": "passthru", 00:29:08.029 "block_size": 4096, 00:29:08.029 "num_blocks": 8192, 00:29:08.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:08.029 "md_size": 32, 00:29:08.029 "md_interleave": false, 00:29:08.029 "dif_type": 0, 00:29:08.029 "assigned_rate_limits": { 00:29:08.029 "rw_ios_per_sec": 0, 00:29:08.029 "rw_mbytes_per_sec": 0, 00:29:08.029 "r_mbytes_per_sec": 0, 00:29:08.029 "w_mbytes_per_sec": 0 00:29:08.029 }, 00:29:08.029 "claimed": true, 00:29:08.029 "claim_type": "exclusive_write", 00:29:08.029 "zoned": false, 00:29:08.029 "supported_io_types": { 00:29:08.029 "read": true, 00:29:08.029 "write": true, 00:29:08.029 "unmap": true, 00:29:08.029 "flush": true, 00:29:08.029 "reset": true, 00:29:08.029 "nvme_admin": false, 00:29:08.029 "nvme_io": false, 00:29:08.029 "nvme_io_md": false, 00:29:08.029 "write_zeroes": true, 00:29:08.029 "zcopy": true, 00:29:08.029 "get_zone_info": false, 00:29:08.029 "zone_management": false, 00:29:08.029 "zone_append": false, 00:29:08.029 "compare": false, 00:29:08.029 "compare_and_write": false, 00:29:08.029 "abort": true, 00:29:08.029 "seek_hole": false, 00:29:08.029 "seek_data": false, 00:29:08.029 "copy": true, 00:29:08.029 "nvme_iov_md": false 00:29:08.029 }, 00:29:08.029 "memory_domains": [ 00:29:08.029 { 00:29:08.029 "dma_device_id": "system", 00:29:08.029 "dma_device_type": 1 00:29:08.029 }, 00:29:08.029 { 00:29:08.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.029 "dma_device_type": 2 00:29:08.029 } 00:29:08.029 ], 00:29:08.029 "driver_specific": { 00:29:08.029 
"passthru": { 00:29:08.029 "name": "pt2", 00:29:08.029 "base_bdev_name": "malloc2" 00:29:08.029 } 00:29:08.029 } 00:29:08.029 }' 00:29:08.029 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.029 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.029 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:29:08.029 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.029 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.288 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:08.288 10:39:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:08.288 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:29:08.547 [2024-07-26 10:39:21.352167] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:08.547 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e9280560-cf06-4f47-aa9f-ddbd7a61e6e5 00:29:08.547 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z e9280560-cf06-4f47-aa9f-ddbd7a61e6e5 ']' 00:29:08.547 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:08.806 [2024-07-26 10:39:21.580508] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:08.806 [2024-07-26 10:39:21.580527] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:08.806 [2024-07-26 10:39:21.580577] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:08.806 [2024-07-26 10:39:21.580625] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:08.806 [2024-07-26 10:39:21.580636] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f47c40 name raid_bdev1, state offline 00:29:08.806 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.807 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:29:09.065 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:29:09.065 10:39:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:29:09.065 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:29:09.065 10:39:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:09.324 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:29:09.324 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:09.583 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:09.583 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:09.842 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:29:09.842 [2024-07-26 10:39:22.727489] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:09.842 [2024-07-26 10:39:22.728721] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:09.842 [2024-07-26 10:39:22.728771] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:09.842 [2024-07-26 10:39:22.728808] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:09.842 [2024-07-26 10:39:22.728825] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:09.842 [2024-07-26 10:39:22.728834] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f4d330 name raid_bdev1, state configuring 00:29:09.842 request: 00:29:09.842 { 00:29:09.842 "name": "raid_bdev1", 00:29:09.842 "raid_level": "raid1", 00:29:09.842 "base_bdevs": [ 00:29:09.842 "malloc1", 00:29:09.842 "malloc2" 00:29:09.842 ], 00:29:09.842 "superblock": false, 00:29:09.842 "method": "bdev_raid_create", 00:29:09.842 "req_id": 1 00:29:09.842 } 00:29:09.842 Got JSON-RPC error response 00:29:09.842 response: 00:29:09.842 { 00:29:09.842 "code": -17, 00:29:09.842 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:09.842 } 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:29:10.101 10:39:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:10.360 [2024-07-26 10:39:23.172615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:10.360 [2024-07-26 10:39:23.172659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:10.360 [2024-07-26 10:39:23.172677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f919c0 00:29:10.360 [2024-07-26 10:39:23.172689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:10.360 [2024-07-26 10:39:23.174014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:10.360 [2024-07-26 10:39:23.174045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:10.360 [2024-07-26 10:39:23.174089] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:10.360 [2024-07-26 10:39:23.174111] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:10.360 pt1 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid1 0 2 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.360 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.619 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:10.619 "name": "raid_bdev1", 00:29:10.619 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:10.619 "strip_size_kb": 0, 00:29:10.619 "state": "configuring", 00:29:10.619 "raid_level": "raid1", 00:29:10.619 "superblock": true, 00:29:10.619 "num_base_bdevs": 2, 00:29:10.619 "num_base_bdevs_discovered": 1, 00:29:10.619 "num_base_bdevs_operational": 2, 00:29:10.619 "base_bdevs_list": [ 00:29:10.619 { 00:29:10.619 "name": "pt1", 00:29:10.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:10.619 "is_configured": true, 00:29:10.619 "data_offset": 256, 00:29:10.619 "data_size": 7936 00:29:10.619 }, 00:29:10.619 { 00:29:10.619 "name": null, 00:29:10.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:10.619 "is_configured": false, 00:29:10.619 "data_offset": 256, 00:29:10.619 "data_size": 7936 00:29:10.619 } 00:29:10.619 ] 00:29:10.619 }' 00:29:10.619 10:39:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:10.619 10:39:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:11.186 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:29:11.186 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:29:11.186 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:29:11.186 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:11.443 [2024-07-26 10:39:24.215360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:11.443 [2024-07-26 10:39:24.215406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.443 [2024-07-26 10:39:24.215424] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f490a0 00:29:11.443 [2024-07-26 10:39:24.215437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.443 [2024-07-26 10:39:24.215609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.443 [2024-07-26 10:39:24.215624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:11.443 [2024-07-26 10:39:24.215663] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:11.443 [2024-07-26 10:39:24.215679] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:11.443 [2024-07-26 10:39:24.215768] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f49910 00:29:11.443 [2024-07-26 10:39:24.215778] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:11.443 [2024-07-26 10:39:24.215830] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f47f30 00:29:11.443 [2024-07-26 10:39:24.215923] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f49910 00:29:11.443 [2024-07-26 10:39:24.215932] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f49910 00:29:11.443 [2024-07-26 10:39:24.215995] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.443 pt2 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.443 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.702 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.702 "name": "raid_bdev1", 00:29:11.702 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:11.702 "strip_size_kb": 0, 00:29:11.702 "state": "online", 00:29:11.702 "raid_level": "raid1", 
00:29:11.702 "superblock": true, 00:29:11.702 "num_base_bdevs": 2, 00:29:11.702 "num_base_bdevs_discovered": 2, 00:29:11.702 "num_base_bdevs_operational": 2, 00:29:11.702 "base_bdevs_list": [ 00:29:11.702 { 00:29:11.702 "name": "pt1", 00:29:11.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:11.702 "is_configured": true, 00:29:11.702 "data_offset": 256, 00:29:11.702 "data_size": 7936 00:29:11.702 }, 00:29:11.702 { 00:29:11.702 "name": "pt2", 00:29:11.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:11.702 "is_configured": true, 00:29:11.702 "data_offset": 256, 00:29:11.702 "data_size": 7936 00:29:11.702 } 00:29:11.702 ] 00:29:11.702 }' 00:29:11.702 10:39:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.702 10:39:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:12.269 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:12.528 [2024-07-26 10:39:25.254397] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:12.528 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:12.528 "name": "raid_bdev1", 00:29:12.528 "aliases": [ 00:29:12.528 "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5" 00:29:12.528 ], 00:29:12.528 "product_name": "Raid Volume", 00:29:12.528 "block_size": 4096, 00:29:12.528 "num_blocks": 7936, 00:29:12.528 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:12.528 "md_size": 32, 00:29:12.528 "md_interleave": false, 00:29:12.528 "dif_type": 0, 00:29:12.528 "assigned_rate_limits": { 00:29:12.528 "rw_ios_per_sec": 0, 00:29:12.528 "rw_mbytes_per_sec": 0, 00:29:12.528 "r_mbytes_per_sec": 0, 00:29:12.528 "w_mbytes_per_sec": 0 00:29:12.528 }, 00:29:12.528 "claimed": false, 00:29:12.528 "zoned": false, 00:29:12.529 "supported_io_types": { 00:29:12.529 "read": true, 00:29:12.529 "write": true, 00:29:12.529 "unmap": false, 00:29:12.529 "flush": false, 00:29:12.529 "reset": true, 00:29:12.529 "nvme_admin": false, 00:29:12.529 "nvme_io": false, 00:29:12.529 "nvme_io_md": false, 00:29:12.529 "write_zeroes": true, 00:29:12.529 "zcopy": false, 00:29:12.529 "get_zone_info": false, 00:29:12.529 "zone_management": false, 00:29:12.529 "zone_append": false, 00:29:12.529 "compare": false, 00:29:12.529 "compare_and_write": false, 00:29:12.529 "abort": false, 00:29:12.529 "seek_hole": false, 00:29:12.529 "seek_data": false, 00:29:12.529 "copy": false, 00:29:12.529 "nvme_iov_md": false 00:29:12.529 }, 00:29:12.529 "memory_domains": [ 00:29:12.529 { 
00:29:12.529 "dma_device_id": "system", 00:29:12.529 "dma_device_type": 1 00:29:12.529 }, 00:29:12.529 { 00:29:12.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.529 "dma_device_type": 2 00:29:12.529 }, 00:29:12.529 { 00:29:12.529 "dma_device_id": "system", 00:29:12.529 "dma_device_type": 1 00:29:12.529 }, 00:29:12.529 { 00:29:12.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.529 "dma_device_type": 2 00:29:12.529 } 00:29:12.529 ], 00:29:12.529 "driver_specific": { 00:29:12.529 "raid": { 00:29:12.529 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:12.529 "strip_size_kb": 0, 00:29:12.529 "state": "online", 00:29:12.529 "raid_level": "raid1", 00:29:12.529 "superblock": true, 00:29:12.529 "num_base_bdevs": 2, 00:29:12.529 "num_base_bdevs_discovered": 2, 00:29:12.529 "num_base_bdevs_operational": 2, 00:29:12.529 "base_bdevs_list": [ 00:29:12.529 { 00:29:12.529 "name": "pt1", 00:29:12.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.529 "is_configured": true, 00:29:12.529 "data_offset": 256, 00:29:12.529 "data_size": 7936 00:29:12.529 }, 00:29:12.529 { 00:29:12.529 "name": "pt2", 00:29:12.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:12.529 "is_configured": true, 00:29:12.529 "data_offset": 256, 00:29:12.529 "data_size": 7936 00:29:12.529 } 00:29:12.529 ] 00:29:12.529 } 00:29:12.529 } 00:29:12.529 }' 00:29:12.529 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:12.529 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:12.529 pt2' 00:29:12.529 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:12.529 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:12.529 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:12.788 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:12.788 "name": "pt1", 00:29:12.788 "aliases": [ 00:29:12.788 "00000000-0000-0000-0000-000000000001" 00:29:12.788 ], 00:29:12.788 "product_name": "passthru", 00:29:12.788 "block_size": 4096, 00:29:12.788 "num_blocks": 8192, 00:29:12.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:12.788 "md_size": 32, 00:29:12.788 "md_interleave": false, 00:29:12.788 "dif_type": 0, 00:29:12.788 "assigned_rate_limits": { 00:29:12.788 "rw_ios_per_sec": 0, 00:29:12.788 "rw_mbytes_per_sec": 0, 00:29:12.788 "r_mbytes_per_sec": 0, 00:29:12.788 "w_mbytes_per_sec": 0 00:29:12.788 }, 00:29:12.788 "claimed": true, 00:29:12.788 "claim_type": "exclusive_write", 00:29:12.788 "zoned": false, 00:29:12.788 "supported_io_types": { 00:29:12.788 "read": true, 00:29:12.788 "write": true, 00:29:12.788 "unmap": true, 00:29:12.788 "flush": true, 00:29:12.788 "reset": true, 00:29:12.788 "nvme_admin": false, 00:29:12.788 "nvme_io": false, 00:29:12.788 "nvme_io_md": false, 00:29:12.788 "write_zeroes": true, 00:29:12.788 "zcopy": true, 00:29:12.788 "get_zone_info": false, 00:29:12.788 "zone_management": false, 00:29:12.788 "zone_append": false, 00:29:12.788 "compare": false, 00:29:12.788 "compare_and_write": false, 00:29:12.788 "abort": true, 00:29:12.788 "seek_hole": false, 00:29:12.788 "seek_data": false, 00:29:12.788 "copy": true, 
00:29:12.788 "nvme_iov_md": false 00:29:12.788 }, 00:29:12.788 "memory_domains": [ 00:29:12.788 { 00:29:12.788 "dma_device_id": "system", 00:29:12.788 "dma_device_type": 1 00:29:12.788 }, 00:29:12.788 { 00:29:12.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.788 "dma_device_type": 2 00:29:12.788 } 00:29:12.788 ], 00:29:12.788 "driver_specific": { 00:29:12.788 "passthru": { 00:29:12.788 "name": "pt1", 00:29:12.788 "base_bdev_name": "malloc1" 00:29:12.788 } 00:29:12.788 } 00:29:12.788 }' 00:29:12.788 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:12.788 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:12.788 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:29:12.788 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:12.788 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:13.047 10:39:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:13.306 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:13.306 "name": "pt2", 00:29:13.306 "aliases": [ 00:29:13.306 "00000000-0000-0000-0000-000000000002" 00:29:13.306 ], 00:29:13.306 "product_name": "passthru", 00:29:13.306 "block_size": 4096, 00:29:13.306 "num_blocks": 8192, 00:29:13.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:13.306 "md_size": 32, 00:29:13.306 "md_interleave": false, 00:29:13.306 "dif_type": 0, 00:29:13.306 "assigned_rate_limits": { 00:29:13.306 "rw_ios_per_sec": 0, 00:29:13.306 "rw_mbytes_per_sec": 0, 00:29:13.306 "r_mbytes_per_sec": 0, 00:29:13.306 "w_mbytes_per_sec": 0 00:29:13.306 }, 00:29:13.306 "claimed": true, 00:29:13.306 "claim_type": "exclusive_write", 00:29:13.306 "zoned": false, 00:29:13.306 "supported_io_types": { 00:29:13.306 "read": true, 00:29:13.306 "write": true, 00:29:13.306 "unmap": true, 00:29:13.306 "flush": true, 00:29:13.306 "reset": true, 00:29:13.306 "nvme_admin": false, 00:29:13.306 "nvme_io": false, 00:29:13.306 "nvme_io_md": false, 00:29:13.306 "write_zeroes": true, 00:29:13.306 "zcopy": true, 00:29:13.306 "get_zone_info": false, 00:29:13.306 "zone_management": false, 00:29:13.306 "zone_append": false, 00:29:13.306 "compare": false, 00:29:13.306 
"compare_and_write": false, 00:29:13.306 "abort": true, 00:29:13.306 "seek_hole": false, 00:29:13.306 "seek_data": false, 00:29:13.306 "copy": true, 00:29:13.306 "nvme_iov_md": false 00:29:13.306 }, 00:29:13.306 "memory_domains": [ 00:29:13.306 { 00:29:13.306 "dma_device_id": "system", 00:29:13.306 "dma_device_type": 1 00:29:13.306 }, 00:29:13.306 { 00:29:13.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.306 "dma_device_type": 2 00:29:13.306 } 00:29:13.306 ], 00:29:13.306 "driver_specific": { 00:29:13.306 "passthru": { 00:29:13.306 "name": "pt2", 00:29:13.306 "base_bdev_name": "malloc2" 00:29:13.306 } 00:29:13.306 } 00:29:13.306 }' 00:29:13.306 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.306 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.565 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:29:13.824 [2024-07-26 10:39:26.678122] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' e9280560-cf06-4f47-aa9f-ddbd7a61e6e5 '!=' e9280560-cf06-4f47-aa9f-ddbd7a61e6e5 ']' 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:29:13.824 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:14.083 [2024-07-26 10:39:26.906528] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.083 10:39:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.083 10:39:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.343 10:39:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:14.343 "name": "raid_bdev1", 00:29:14.343 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:14.343 "strip_size_kb": 0, 00:29:14.343 "state": "online", 00:29:14.343 "raid_level": "raid1", 00:29:14.343 "superblock": true, 00:29:14.343 "num_base_bdevs": 2, 00:29:14.343 "num_base_bdevs_discovered": 1, 00:29:14.343 "num_base_bdevs_operational": 1, 00:29:14.343 "base_bdevs_list": [ 00:29:14.343 { 00:29:14.343 "name": null, 00:29:14.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.343 "is_configured": false, 00:29:14.343 "data_offset": 256, 00:29:14.343 "data_size": 7936 00:29:14.343 }, 00:29:14.343 { 00:29:14.343 "name": "pt2", 00:29:14.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:14.343 "is_configured": true, 00:29:14.343 "data_offset": 256, 00:29:14.343 "data_size": 7936 00:29:14.343 } 00:29:14.343 ] 00:29:14.343 }' 00:29:14.343 10:39:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:14.343 10:39:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:14.911 10:39:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:15.170 [2024-07-26 10:39:27.945250] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:15.170 [2024-07-26 10:39:27.945280] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:15.171 [2024-07-26 10:39:27.945329] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:15.171 [2024-07-26 10:39:27.945370] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:15.171 [2024-07-26 10:39:27.945381] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f49910 name raid_bdev1, state offline 00:29:15.171 10:39:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:29:15.171 10:39:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:29:15.430 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:29:15.430 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:29:15.430 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:29:15.430 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:15.430 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:15.689 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:15.689 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:15.689 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:29:15.689 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:15.689 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:29:15.689 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:15.948 [2024-07-26 10:39:28.606958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:15.948 [2024-07-26 10:39:28.607004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.949 [2024-07-26 10:39:28.607021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f4d040 00:29:15.949 [2024-07-26 10:39:28.607033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.949 [2024-07-26 10:39:28.608388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.949 [2024-07-26 10:39:28.608414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:15.949 [2024-07-26 10:39:28.608458] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:15.949 [2024-07-26 10:39:28.608480] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:15.949 [2024-07-26 10:39:28.608552] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f4a4a0 00:29:15.949 [2024-07-26 10:39:28.608562] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:15.949 [2024-07-26 10:39:28.608614] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1df4c70 00:29:15.949 [2024-07-26 10:39:28.608704] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f4a4a0 00:29:15.949 [2024-07-26 10:39:28.608713] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f4a4a0 00:29:15.949 [2024-07-26 10:39:28.608774] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.949 pt2 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.949 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.208 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.208 "name": "raid_bdev1", 00:29:16.208 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:16.208 "strip_size_kb": 0, 00:29:16.208 "state": "online", 00:29:16.208 "raid_level": "raid1", 00:29:16.208 "superblock": true, 00:29:16.208 "num_base_bdevs": 2, 00:29:16.208 "num_base_bdevs_discovered": 1, 00:29:16.208 "num_base_bdevs_operational": 1, 00:29:16.208 "base_bdevs_list": [ 00:29:16.208 { 00:29:16.208 "name": null, 00:29:16.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.208 "is_configured": false, 00:29:16.208 "data_offset": 256, 00:29:16.208 "data_size": 7936 00:29:16.208 }, 00:29:16.208 { 00:29:16.208 "name": "pt2", 00:29:16.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:16.208 "is_configured": true, 00:29:16.208 "data_offset": 256, 00:29:16.208 "data_size": 7936 00:29:16.208 } 00:29:16.208 ] 00:29:16.208 }' 00:29:16.208 10:39:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.208 10:39:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:16.776 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:16.776 [2024-07-26 10:39:29.629634] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:16.776 [2024-07-26 10:39:29.629657] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:16.776 [2024-07-26 10:39:29.629705] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:16.776 [2024-07-26 10:39:29.629744] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:16.776 [2024-07-26 10:39:29.629754] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f4a4a0 name raid_bdev1, state offline 00:29:16.777 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.777 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:29:17.035 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:29:17.035 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:29:17.035 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:29:17.035 10:39:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:17.294 [2024-07-26 10:39:30.082815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:17.294 [2024-07-26 10:39:30.082867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.294 [2024-07-26 10:39:30.082884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f49a30 00:29:17.294 [2024-07-26 10:39:30.082896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.294 [2024-07-26 10:39:30.084252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.294 [2024-07-26 10:39:30.084283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:17.294 [2024-07-26 10:39:30.084325] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:17.294 [2024-07-26 10:39:30.084347] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:17.294 [2024-07-26 10:39:30.084429] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:17.294 [2024-07-26 10:39:30.084441] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:17.294 [2024-07-26 10:39:30.084456] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1edb4e0 name raid_bdev1, state configuring 00:29:17.294 [2024-07-26 10:39:30.084476] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:17.294 [2024-07-26 10:39:30.084526] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1df44d0 00:29:17.294 [2024-07-26 10:39:30.084535] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:17.294 [2024-07-26 10:39:30.084587] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f4aa70 00:29:17.294 [2024-07-26 10:39:30.084674] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1df44d0 00:29:17.294 [2024-07-26 10:39:30.084683] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1df44d0 00:29:17.294 [2024-07-26 10:39:30.084749] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.294 pt1 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:17.294 10:39:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.294 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.552 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.552 "name": "raid_bdev1", 00:29:17.552 "uuid": "e9280560-cf06-4f47-aa9f-ddbd7a61e6e5", 00:29:17.552 "strip_size_kb": 0, 00:29:17.552 "state": "online", 00:29:17.552 "raid_level": "raid1", 00:29:17.552 "superblock": true, 00:29:17.552 "num_base_bdevs": 2, 00:29:17.552 "num_base_bdevs_discovered": 1, 00:29:17.552 "num_base_bdevs_operational": 1, 00:29:17.552 "base_bdevs_list": [ 00:29:17.552 { 00:29:17.552 "name": null, 00:29:17.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.552 "is_configured": false, 00:29:17.552 "data_offset": 256, 00:29:17.552 "data_size": 7936 00:29:17.552 }, 00:29:17.552 { 00:29:17.552 "name": "pt2", 00:29:17.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:17.552 "is_configured": true, 00:29:17.552 "data_offset": 256, 00:29:17.552 "data_size": 7936 00:29:17.552 } 00:29:17.552 ] 00:29:17.552 }' 00:29:17.552 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.552 10:39:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:18.119 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:18.119 10:39:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:18.379 10:39:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:29:18.379 10:39:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:18.379 10:39:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:29:18.638 [2024-07-26 10:39:31.370408] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' e9280560-cf06-4f47-aa9f-ddbd7a61e6e5 '!=' 
e9280560-cf06-4f47-aa9f-ddbd7a61e6e5 ']' 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 3523715 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 3523715 ']' 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 3523715 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3523715 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3523715' 00:29:18.638 killing process with pid 3523715 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 3523715 00:29:18.638 [2024-07-26 10:39:31.448454] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:18.638 [2024-07-26 10:39:31.448501] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:18.638 [2024-07-26 10:39:31.448542] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:18.638 [2024-07-26 10:39:31.448552] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1df44d0 name raid_bdev1, state offline 00:29:18.638 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 3523715 00:29:18.638 [2024-07-26 10:39:31.467091] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:18.897 10:39:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:29:18.897 00:29:18.897 real 0m14.840s 00:29:18.897 user 0m26.905s 00:29:18.897 sys 0m2.785s 00:29:18.897 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.897 10:39:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:18.897 ************************************ 00:29:18.897 END TEST raid_superblock_test_md_separate 00:29:18.897 ************************************ 00:29:18.897 10:39:31 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:29:18.897 10:39:31 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:29:18.897 10:39:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:18.897 10:39:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.897 10:39:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:18.897 ************************************ 00:29:18.897 START TEST raid_rebuild_test_sb_md_separate 00:29:18.897 ************************************ 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 
00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=3526416 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 3526416 /var/tmp/spdk-raid.sock 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 3526416 ']' 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 
00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:18.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.897 10:39:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:18.897 [2024-07-26 10:39:31.790185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:29:18.897 [2024-07-26 10:39:31.790241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526416 ] 00:29:18.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:18.897 Zero copy mechanism will not be used. 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:29:19.156 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:19.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.156 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:19.156 [2024-07-26 10:39:31.915811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.156 [2024-07-26 10:39:31.960348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.156 [2024-07-26 10:39:32.014623] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:19.156 [2024-07-26 10:39:32.014643] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.127 10:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.127 10:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:29:20.127 10:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:20.127 10:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:29:20.127 BaseBdev1_malloc 00:29:20.127 10:39:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:20.385 [2024-07-26 10:39:33.128440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:20.385 [2024-07-26 10:39:33.128482] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.385 [2024-07-26 10:39:33.128505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2299890 00:29:20.385 [2024-07-26 10:39:33.128517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.385 [2024-07-26 10:39:33.129821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.385 [2024-07-26 10:39:33.129846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:20.385 BaseBdev1 00:29:20.385 10:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:20.385 10:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:29:20.645 BaseBdev2_malloc 00:29:20.645 10:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:20.904 [2024-07-26 10:39:33.582726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:20.904 [2024-07-26 10:39:33.582766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.904 [2024-07-26 10:39:33.582784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2254340 00:29:20.904 [2024-07-26 10:39:33.582795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.904 [2024-07-26 10:39:33.584170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.904 [2024-07-26 10:39:33.584195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:20.904 BaseBdev2 00:29:20.904 10:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:29:21.161 spare_malloc 00:29:21.161 10:39:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:21.161 spare_delay 00:29:21.418 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:21.418 [2024-07-26 10:39:34.289522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:21.418 [2024-07-26 10:39:34.289562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:21.418 [2024-07-26 10:39:34.289580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2250fb0 00:29:21.418 [2024-07-26 10:39:34.289592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:21.418 [2024-07-26 10:39:34.290825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:21.418 [2024-07-26 10:39:34.290850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:21.418 spare 00:29:21.418 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:21.676 [2024-07-26 10:39:34.510122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:21.676 [2024-07-26 10:39:34.511250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:21.676 [2024-07-26 10:39:34.511379] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2252730 00:29:21.676 [2024-07-26 10:39:34.511390] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:21.676 [2024-07-26 10:39:34.511460] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e4b30 00:29:21.676 [2024-07-26 10:39:34.511557] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2252730 00:29:21.676 [2024-07-26 10:39:34.511565] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2252730 00:29:21.676 [2024-07-26 10:39:34.511638] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.676 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.934 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.934 "name": "raid_bdev1", 00:29:21.934 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:21.934 "strip_size_kb": 0, 00:29:21.934 "state": "online", 00:29:21.934 "raid_level": "raid1", 00:29:21.934 "superblock": true, 00:29:21.934 "num_base_bdevs": 2, 00:29:21.934 "num_base_bdevs_discovered": 2, 00:29:21.934 "num_base_bdevs_operational": 2, 00:29:21.934 "base_bdevs_list": [ 00:29:21.934 { 00:29:21.934 "name": "BaseBdev1", 00:29:21.934 "uuid": "c7c61f49-1bae-5946-8b81-aaa73c40e3e4", 00:29:21.934 "is_configured": true, 00:29:21.934 "data_offset": 256, 00:29:21.934 "data_size": 7936 00:29:21.934 }, 00:29:21.934 { 00:29:21.934 "name": "BaseBdev2", 00:29:21.934 
"uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:21.934 "is_configured": true, 00:29:21.934 "data_offset": 256, 00:29:21.934 "data_size": 7936 00:29:21.934 } 00:29:21.934 ] 00:29:21.934 }' 00:29:21.934 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.934 10:39:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:22.499 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:22.500 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:22.758 [2024-07-26 10:39:35.516981] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:22.758 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:29:22.758 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:22.758 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.016 10:39:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:23.274 [2024-07-26 10:39:35.982010] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e48c0 00:29:23.274 /dev/nbd0 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.274 1+0 records in 00:29:23.274 1+0 records out 00:29:23.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257947 s, 15.9 MB/s 00:29:23.274 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:23.275 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:29:24.208 7936+0 records in 00:29:24.208 7936+0 records out 00:29:24.208 32505856 bytes (33 MB, 31 MiB) copied, 0.718256 s, 45.3 MB/s 00:29:24.208 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:24.208 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:24.208 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:24.208 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:24.209 [2024-07-26 10:39:36.961932] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.209 10:39:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:24.467 [2024-07-26 10:39:37.174535] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.467 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.725 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:24.725 "name": "raid_bdev1", 00:29:24.725 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:24.725 "strip_size_kb": 0, 00:29:24.725 "state": "online", 00:29:24.725 "raid_level": "raid1", 00:29:24.725 "superblock": true, 00:29:24.725 "num_base_bdevs": 2, 00:29:24.725 "num_base_bdevs_discovered": 1, 00:29:24.725 "num_base_bdevs_operational": 1, 00:29:24.725 "base_bdevs_list": [ 00:29:24.725 { 00:29:24.725 "name": null, 00:29:24.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.725 
"is_configured": false, 00:29:24.725 "data_offset": 256, 00:29:24.725 "data_size": 7936 00:29:24.725 }, 00:29:24.725 { 00:29:24.725 "name": "BaseBdev2", 00:29:24.725 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:24.725 "is_configured": true, 00:29:24.725 "data_offset": 256, 00:29:24.725 "data_size": 7936 00:29:24.725 } 00:29:24.725 ] 00:29:24.725 }' 00:29:24.725 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:24.725 10:39:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:25.291 10:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:25.549 [2024-07-26 10:39:38.221302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:25.549 [2024-07-26 10:39:38.223429] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e35e0 00:29:25.549 [2024-07-26 10:39:38.225429] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:25.550 10:39:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:26.484 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.484 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.484 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.484 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.484 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.485 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.485 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.743 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.743 "name": "raid_bdev1", 00:29:26.743 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:26.743 "strip_size_kb": 0, 00:29:26.743 "state": "online", 00:29:26.743 "raid_level": "raid1", 00:29:26.743 "superblock": true, 00:29:26.743 "num_base_bdevs": 2, 00:29:26.743 "num_base_bdevs_discovered": 2, 00:29:26.743 "num_base_bdevs_operational": 2, 00:29:26.743 "process": { 00:29:26.743 "type": "rebuild", 00:29:26.743 "target": "spare", 00:29:26.743 "progress": { 00:29:26.743 "blocks": 3072, 00:29:26.743 "percent": 38 00:29:26.743 } 00:29:26.743 }, 00:29:26.743 "base_bdevs_list": [ 00:29:26.743 { 00:29:26.743 "name": "spare", 00:29:26.743 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:26.743 "is_configured": true, 00:29:26.743 "data_offset": 256, 00:29:26.743 "data_size": 7936 00:29:26.743 }, 00:29:26.743 { 00:29:26.743 "name": "BaseBdev2", 00:29:26.743 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:26.743 "is_configured": true, 00:29:26.743 "data_offset": 256, 00:29:26.743 "data_size": 7936 00:29:26.743 } 00:29:26.743 ] 00:29:26.743 }' 00:29:26.743 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:29:26.744 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.744 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.744 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.744 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:27.002 [2024-07-26 10:39:39.766494] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:27.002 [2024-07-26 10:39:39.837281] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:27.002 [2024-07-26 10:39:39.837328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.002 [2024-07-26 10:39:39.837348] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:27.002 [2024-07-26 10:39:39.837356] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.002 10:39:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.261 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:27.261 "name": "raid_bdev1", 00:29:27.261 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:27.261 "strip_size_kb": 0, 00:29:27.261 "state": "online", 00:29:27.261 "raid_level": "raid1", 00:29:27.261 "superblock": true, 00:29:27.261 "num_base_bdevs": 2, 00:29:27.261 "num_base_bdevs_discovered": 1, 00:29:27.261 "num_base_bdevs_operational": 1, 00:29:27.261 "base_bdevs_list": [ 00:29:27.261 { 00:29:27.261 "name": null, 00:29:27.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.261 "is_configured": false, 00:29:27.261 "data_offset": 256, 00:29:27.261 "data_size": 7936 00:29:27.261 }, 00:29:27.261 
{ 00:29:27.261 "name": "BaseBdev2", 00:29:27.261 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:27.261 "is_configured": true, 00:29:27.261 "data_offset": 256, 00:29:27.261 "data_size": 7936 00:29:27.261 } 00:29:27.261 ] 00:29:27.261 }' 00:29:27.261 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:27.261 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.831 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.089 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.089 "name": "raid_bdev1", 00:29:28.089 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:28.089 "strip_size_kb": 0, 00:29:28.089 "state": "online", 00:29:28.089 "raid_level": "raid1", 00:29:28.089 "superblock": true, 00:29:28.089 "num_base_bdevs": 2, 00:29:28.089 "num_base_bdevs_discovered": 1, 00:29:28.089 "num_base_bdevs_operational": 1, 00:29:28.089 "base_bdevs_list": [ 00:29:28.089 { 00:29:28.089 "name": null, 00:29:28.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.089 "is_configured": false, 00:29:28.089 "data_offset": 256, 00:29:28.089 "data_size": 7936 00:29:28.089 }, 00:29:28.089 { 00:29:28.089 "name": "BaseBdev2", 00:29:28.089 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:28.089 "is_configured": true, 00:29:28.089 "data_offset": 256, 00:29:28.089 "data_size": 7936 00:29:28.089 } 00:29:28.089 ] 00:29:28.089 }' 00:29:28.089 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.089 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:28.089 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.347 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:28.347 10:39:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:28.347 [2024-07-26 10:39:41.199724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:28.347 [2024-07-26 10:39:41.201891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e35e0 00:29:28.347 [2024-07-26 10:39:41.203226] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:28.347 10:39:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.721 "name": "raid_bdev1", 00:29:29.721 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:29.721 "strip_size_kb": 0, 00:29:29.721 "state": "online", 00:29:29.721 "raid_level": "raid1", 00:29:29.721 "superblock": true, 00:29:29.721 "num_base_bdevs": 2, 00:29:29.721 "num_base_bdevs_discovered": 2, 00:29:29.721 "num_base_bdevs_operational": 2, 00:29:29.721 "process": { 00:29:29.721 "type": "rebuild", 00:29:29.721 "target": "spare", 00:29:29.721 "progress": { 00:29:29.721 "blocks": 3072, 00:29:29.721 "percent": 38 00:29:29.721 } 00:29:29.721 }, 00:29:29.721 "base_bdevs_list": [ 00:29:29.721 { 00:29:29.721 "name": "spare", 00:29:29.721 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:29.721 "is_configured": true, 00:29:29.721 "data_offset": 256, 00:29:29.721 "data_size": 7936 00:29:29.721 }, 00:29:29.721 { 00:29:29.721 "name": "BaseBdev2", 00:29:29.721 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:29.721 "is_configured": true, 00:29:29.721 "data_offset": 256, 00:29:29.721 "data_size": 7936 00:29:29.721 } 00:29:29.721 ] 00:29:29.721 }' 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.721 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:29:29.722 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local 
timeout=1026 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.722 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.980 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.980 "name": "raid_bdev1", 00:29:29.980 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:29.980 "strip_size_kb": 0, 00:29:29.980 "state": "online", 00:29:29.980 "raid_level": "raid1", 00:29:29.980 "superblock": true, 00:29:29.980 "num_base_bdevs": 2, 00:29:29.980 "num_base_bdevs_discovered": 2, 00:29:29.980 "num_base_bdevs_operational": 2, 00:29:29.980 "process": { 00:29:29.980 "type": "rebuild", 00:29:29.980 "target": "spare", 00:29:29.980 "progress": { 00:29:29.980 "blocks": 3840, 00:29:29.980 "percent": 48 00:29:29.980 } 00:29:29.980 }, 00:29:29.980 "base_bdevs_list": [ 00:29:29.980 { 00:29:29.980 "name": "spare", 00:29:29.980 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:29.980 "is_configured": true, 00:29:29.980 "data_offset": 256, 00:29:29.980 "data_size": 7936 00:29:29.980 }, 00:29:29.980 { 00:29:29.980 "name": "BaseBdev2", 00:29:29.980 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:29.980 "is_configured": true, 00:29:29.980 "data_offset": 256, 00:29:29.980 "data_size": 7936 00:29:29.980 } 00:29:29.980 ] 00:29:29.980 }' 00:29:29.980 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:29.980 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.980 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:29.980 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.980 10:39:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:31.354 10:39:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.354 10:39:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.354 10:39:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:31.354 "name": "raid_bdev1", 00:29:31.354 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:31.354 "strip_size_kb": 0, 00:29:31.354 "state": "online", 00:29:31.354 "raid_level": "raid1", 00:29:31.354 "superblock": true, 00:29:31.354 "num_base_bdevs": 2, 00:29:31.354 "num_base_bdevs_discovered": 2, 00:29:31.354 "num_base_bdevs_operational": 2, 00:29:31.354 "process": { 00:29:31.354 "type": "rebuild", 00:29:31.354 "target": "spare", 00:29:31.354 "progress": { 00:29:31.354 "blocks": 7168, 00:29:31.354 "percent": 90 00:29:31.354 } 00:29:31.354 }, 00:29:31.354 "base_bdevs_list": [ 00:29:31.354 { 00:29:31.354 "name": "spare", 00:29:31.354 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:31.354 "is_configured": true, 00:29:31.354 "data_offset": 256, 00:29:31.354 "data_size": 7936 00:29:31.354 }, 00:29:31.354 { 00:29:31.354 "name": "BaseBdev2", 00:29:31.354 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:31.354 "is_configured": true, 00:29:31.354 "data_offset": 256, 00:29:31.354 "data_size": 7936 00:29:31.354 } 00:29:31.354 ] 00:29:31.354 }' 00:29:31.354 10:39:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:31.354 10:39:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:31.354 10:39:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:31.354 10:39:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:31.354 10:39:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:31.611 [2024-07-26 10:39:44.325924] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:31.611 [2024-07-26 10:39:44.325975] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:31.611 [2024-07-26 10:39:44.326051] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.542 "name": "raid_bdev1", 00:29:32.542 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:32.542 "strip_size_kb": 0, 00:29:32.542 "state": "online", 00:29:32.542 "raid_level": "raid1", 00:29:32.542 "superblock": true, 00:29:32.542 "num_base_bdevs": 2, 00:29:32.542 "num_base_bdevs_discovered": 2, 00:29:32.542 "num_base_bdevs_operational": 2, 00:29:32.542 "base_bdevs_list": [ 00:29:32.542 { 00:29:32.542 "name": "spare", 00:29:32.542 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:32.542 "is_configured": true, 00:29:32.542 "data_offset": 256, 00:29:32.542 "data_size": 7936 00:29:32.542 }, 00:29:32.542 { 00:29:32.542 "name": "BaseBdev2", 00:29:32.542 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:32.542 "is_configured": true, 00:29:32.542 "data_offset": 256, 00:29:32.542 "data_size": 7936 00:29:32.542 } 00:29:32.542 ] 00:29:32.542 }' 00:29:32.542 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.800 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.058 "name": "raid_bdev1", 00:29:33.058 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:33.058 "strip_size_kb": 0, 00:29:33.058 "state": "online", 00:29:33.058 "raid_level": "raid1", 00:29:33.058 "superblock": true, 00:29:33.058 "num_base_bdevs": 2, 00:29:33.058 "num_base_bdevs_discovered": 2, 00:29:33.058 "num_base_bdevs_operational": 2, 00:29:33.058 "base_bdevs_list": [ 00:29:33.058 { 00:29:33.058 "name": "spare", 00:29:33.058 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:33.058 "is_configured": true, 00:29:33.058 "data_offset": 256, 00:29:33.058 "data_size": 7936 00:29:33.058 }, 00:29:33.058 { 00:29:33.058 "name": "BaseBdev2", 00:29:33.058 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:33.058 
"is_configured": true, 00:29:33.058 "data_offset": 256, 00:29:33.058 "data_size": 7936 00:29:33.058 } 00:29:33.058 ] 00:29:33.058 }' 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.058 10:39:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.316 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.316 "name": "raid_bdev1", 00:29:33.316 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:33.316 "strip_size_kb": 0, 00:29:33.316 "state": "online", 00:29:33.316 "raid_level": "raid1", 00:29:33.316 "superblock": true, 00:29:33.316 "num_base_bdevs": 2, 00:29:33.316 "num_base_bdevs_discovered": 2, 00:29:33.316 "num_base_bdevs_operational": 2, 00:29:33.316 "base_bdevs_list": [ 00:29:33.316 { 00:29:33.316 "name": "spare", 00:29:33.316 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:33.316 "is_configured": true, 00:29:33.316 "data_offset": 256, 00:29:33.316 "data_size": 7936 00:29:33.316 }, 00:29:33.316 { 00:29:33.316 "name": "BaseBdev2", 00:29:33.316 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:33.316 "is_configured": true, 00:29:33.316 "data_offset": 256, 00:29:33.316 "data_size": 7936 00:29:33.316 } 00:29:33.316 ] 00:29:33.316 }' 00:29:33.316 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.316 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:33.882 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:34.141 [2024-07-26 10:39:46.787420] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:34.141 [2024-07-26 10:39:46.787446] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:34.141 [2024-07-26 10:39:46.787505] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:34.141 [2024-07-26 10:39:46.787559] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:34.141 [2024-07-26 10:39:46.787569] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2252730 name raid_bdev1, state offline 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:34.141 10:39:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:34.400 /dev/nbd0 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.400 1+0 records in 00:29:34.400 1+0 records out 00:29:34.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225509 s, 18.2 MB/s 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:34.400 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:34.688 /dev/nbd1 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.688 1+0 records in 00:29:34.688 1+0 records out 00:29:34.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322006 s, 12.7 MB/s 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:34.688 
10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:34.688 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:34.954 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.955 10:39:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:35.213 10:39:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:29:35.213 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:35.471 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:35.730 [2024-07-26 10:39:48.458673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:35.730 [2024-07-26 10:39:48.458711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.730 [2024-07-26 10:39:48.458734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21e33e0 00:29:35.730 [2024-07-26 10:39:48.458745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.730 [2024-07-26 10:39:48.460071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.730 [2024-07-26 10:39:48.460098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:35.730 [2024-07-26 10:39:48.460157] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:35.730 [2024-07-26 10:39:48.460180] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:35.730 [2024-07-26 10:39:48.460266] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:35.730 spare 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.730 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.730 10:39:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.730 [2024-07-26 10:39:48.560565] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x21e2690 00:29:35.730 [2024-07-26 10:39:48.560578] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:29:35.730 [2024-07-26 10:39:48.560634] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22536a0 00:29:35.730 [2024-07-26 10:39:48.560736] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21e2690 00:29:35.730 [2024-07-26 10:39:48.560745] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21e2690 00:29:35.730 [2024-07-26 10:39:48.560811] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.989 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:35.989 "name": "raid_bdev1", 00:29:35.989 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:35.989 "strip_size_kb": 0, 00:29:35.989 "state": "online", 00:29:35.989 "raid_level": "raid1", 00:29:35.989 "superblock": true, 00:29:35.989 "num_base_bdevs": 2, 00:29:35.989 "num_base_bdevs_discovered": 2, 00:29:35.989 "num_base_bdevs_operational": 2, 00:29:35.989 "base_bdevs_list": [ 00:29:35.989 { 00:29:35.989 "name": "spare", 00:29:35.989 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:35.989 "is_configured": true, 00:29:35.989 "data_offset": 256, 00:29:35.989 "data_size": 7936 00:29:35.989 }, 00:29:35.989 { 00:29:35.989 "name": "BaseBdev2", 00:29:35.989 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:35.989 "is_configured": true, 00:29:35.989 "data_offset": 256, 00:29:35.989 "data_size": 7936 00:29:35.989 } 00:29:35.989 ] 00:29:35.989 }' 00:29:35.989 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:35.989 10:39:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.557 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.816 "name": "raid_bdev1", 00:29:36.816 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:36.816 "strip_size_kb": 0, 00:29:36.816 "state": "online", 00:29:36.816 "raid_level": "raid1", 00:29:36.816 "superblock": true, 00:29:36.816 "num_base_bdevs": 2, 00:29:36.816 "num_base_bdevs_discovered": 2, 00:29:36.816 "num_base_bdevs_operational": 2, 00:29:36.816 "base_bdevs_list": [ 00:29:36.816 { 
00:29:36.816 "name": "spare", 00:29:36.816 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:36.816 "is_configured": true, 00:29:36.816 "data_offset": 256, 00:29:36.816 "data_size": 7936 00:29:36.816 }, 00:29:36.816 { 00:29:36.816 "name": "BaseBdev2", 00:29:36.816 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:36.816 "is_configured": true, 00:29:36.816 "data_offset": 256, 00:29:36.816 "data_size": 7936 00:29:36.816 } 00:29:36.816 ] 00:29:36.816 }' 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.816 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:37.076 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.076 10:39:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:37.335 [2024-07-26 10:39:50.055018] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.335 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.594 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.594 "name": "raid_bdev1", 00:29:37.594 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:37.594 "strip_size_kb": 0, 
00:29:37.594 "state": "online", 00:29:37.594 "raid_level": "raid1", 00:29:37.594 "superblock": true, 00:29:37.594 "num_base_bdevs": 2, 00:29:37.594 "num_base_bdevs_discovered": 1, 00:29:37.594 "num_base_bdevs_operational": 1, 00:29:37.594 "base_bdevs_list": [ 00:29:37.594 { 00:29:37.594 "name": null, 00:29:37.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.594 "is_configured": false, 00:29:37.594 "data_offset": 256, 00:29:37.594 "data_size": 7936 00:29:37.594 }, 00:29:37.594 { 00:29:37.594 "name": "BaseBdev2", 00:29:37.594 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:37.594 "is_configured": true, 00:29:37.594 "data_offset": 256, 00:29:37.594 "data_size": 7936 00:29:37.594 } 00:29:37.594 ] 00:29:37.594 }' 00:29:37.594 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.594 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:38.162 10:39:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:38.421 [2024-07-26 10:39:51.105803] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.421 [2024-07-26 10:39:51.105933] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:38.421 [2024-07-26 10:39:51.105948] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:38.421 [2024-07-26 10:39:51.105974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.421 [2024-07-26 10:39:51.108022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e2c60 00:29:38.421 [2024-07-26 10:39:51.109259] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:38.421 10:39:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.358 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.618 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.618 "name": "raid_bdev1", 00:29:39.618 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:39.618 "strip_size_kb": 0, 00:29:39.618 "state": "online", 00:29:39.618 "raid_level": "raid1", 00:29:39.618 "superblock": true, 00:29:39.618 "num_base_bdevs": 2, 00:29:39.618 "num_base_bdevs_discovered": 2, 00:29:39.618 "num_base_bdevs_operational": 2, 
00:29:39.618 "process": { 00:29:39.618 "type": "rebuild", 00:29:39.618 "target": "spare", 00:29:39.618 "progress": { 00:29:39.618 "blocks": 2816, 00:29:39.618 "percent": 35 00:29:39.618 } 00:29:39.618 }, 00:29:39.618 "base_bdevs_list": [ 00:29:39.618 { 00:29:39.618 "name": "spare", 00:29:39.618 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:39.618 "is_configured": true, 00:29:39.618 "data_offset": 256, 00:29:39.618 "data_size": 7936 00:29:39.618 }, 00:29:39.618 { 00:29:39.618 "name": "BaseBdev2", 00:29:39.618 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:39.618 "is_configured": true, 00:29:39.618 "data_offset": 256, 00:29:39.618 "data_size": 7936 00:29:39.618 } 00:29:39.618 ] 00:29:39.618 }' 00:29:39.618 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.618 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.618 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.618 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.618 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:39.877 [2024-07-26 10:39:52.594591] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:39.877 [2024-07-26 10:39:52.620349] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:39.877 [2024-07-26 10:39:52.620389] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.877 [2024-07-26 10:39:52.620403] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:39.878 [2024-07-26 10:39:52.620410] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:39.878 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.878 10:39:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.137 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.137 "name": "raid_bdev1", 00:29:40.137 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:40.137 "strip_size_kb": 0, 00:29:40.137 "state": "online", 00:29:40.137 "raid_level": "raid1", 00:29:40.137 "superblock": true, 00:29:40.137 "num_base_bdevs": 2, 00:29:40.137 "num_base_bdevs_discovered": 1, 00:29:40.137 "num_base_bdevs_operational": 1, 00:29:40.137 "base_bdevs_list": [ 00:29:40.137 { 00:29:40.137 "name": null, 00:29:40.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.137 "is_configured": false, 00:29:40.137 "data_offset": 256, 00:29:40.137 "data_size": 7936 00:29:40.137 }, 00:29:40.137 { 00:29:40.137 "name": "BaseBdev2", 00:29:40.137 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:40.137 "is_configured": true, 00:29:40.137 "data_offset": 256, 00:29:40.137 "data_size": 7936 00:29:40.137 } 00:29:40.137 ] 00:29:40.137 }' 00:29:40.137 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.137 10:39:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:40.705 10:39:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:40.705 [2024-07-26 10:39:53.581891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:40.705 [2024-07-26 10:39:53.581935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.705 [2024-07-26 10:39:53.581954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2252850 00:29:40.705 [2024-07-26 10:39:53.581966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.705 [2024-07-26 10:39:53.582170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.705 [2024-07-26 10:39:53.582185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:40.705 [2024-07-26 10:39:53.582237] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:40.705 [2024-07-26 10:39:53.582247] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:40.705 [2024-07-26 10:39:53.582257] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:40.705 [2024-07-26 10:39:53.582273] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:40.705 [2024-07-26 10:39:53.584316] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e2c60 00:29:40.705 [2024-07-26 10:39:53.585540] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:40.705 spare 00:29:40.705 10:39:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.081 "name": "raid_bdev1", 00:29:42.081 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:42.081 "strip_size_kb": 0, 00:29:42.081 "state": "online", 00:29:42.081 "raid_level": "raid1", 00:29:42.081 "superblock": true, 00:29:42.081 "num_base_bdevs": 2, 00:29:42.081 "num_base_bdevs_discovered": 2, 00:29:42.081 "num_base_bdevs_operational": 2, 00:29:42.081 "process": { 00:29:42.081 "type": "rebuild", 00:29:42.081 "target": "spare", 00:29:42.081 "progress": { 00:29:42.081 "blocks": 3072, 00:29:42.081 "percent": 38 00:29:42.081 } 00:29:42.081 }, 00:29:42.081 "base_bdevs_list": [ 00:29:42.081 { 00:29:42.081 "name": "spare", 00:29:42.081 "uuid": "e8895268-d9b6-5174-96fe-0e75512552eb", 00:29:42.081 "is_configured": true, 00:29:42.081 "data_offset": 256, 00:29:42.081 "data_size": 7936 00:29:42.081 }, 00:29:42.081 { 00:29:42.081 "name": "BaseBdev2", 00:29:42.081 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:42.081 "is_configured": true, 00:29:42.081 "data_offset": 256, 00:29:42.081 "data_size": 7936 00:29:42.081 } 00:29:42.081 ] 00:29:42.081 }' 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.081 10:39:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:42.341 [2024-07-26 10:39:55.143475] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:42.341 [2024-07-26 10:39:55.197307] 
bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:42.341 [2024-07-26 10:39:55.197346] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.341 [2024-07-26 10:39:55.197360] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:42.341 [2024-07-26 10:39:55.197367] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.341 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.600 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:42.600 "name": "raid_bdev1", 00:29:42.600 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:42.600 "strip_size_kb": 0, 00:29:42.600 "state": "online", 00:29:42.600 "raid_level": "raid1", 00:29:42.600 "superblock": true, 00:29:42.600 "num_base_bdevs": 2, 00:29:42.600 "num_base_bdevs_discovered": 1, 00:29:42.601 "num_base_bdevs_operational": 1, 00:29:42.601 "base_bdevs_list": [ 00:29:42.601 { 00:29:42.601 "name": null, 00:29:42.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.601 "is_configured": false, 00:29:42.601 "data_offset": 256, 00:29:42.601 "data_size": 7936 00:29:42.601 }, 00:29:42.601 { 00:29:42.601 "name": "BaseBdev2", 00:29:42.601 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:42.601 "is_configured": true, 00:29:42.601 "data_offset": 256, 00:29:42.601 "data_size": 7936 00:29:42.601 } 00:29:42.601 ] 00:29:42.601 }' 00:29:42.601 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:42.601 10:39:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:43.170 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:43.170 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.170 10:39:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:43.170 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:43.170 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.170 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.170 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.429 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.429 "name": "raid_bdev1", 00:29:43.429 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:43.429 "strip_size_kb": 0, 00:29:43.429 "state": "online", 00:29:43.429 "raid_level": "raid1", 00:29:43.429 "superblock": true, 00:29:43.429 "num_base_bdevs": 2, 00:29:43.429 "num_base_bdevs_discovered": 1, 00:29:43.429 "num_base_bdevs_operational": 1, 00:29:43.429 "base_bdevs_list": [ 00:29:43.429 { 00:29:43.429 "name": null, 00:29:43.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.429 "is_configured": false, 00:29:43.429 "data_offset": 256, 00:29:43.429 "data_size": 7936 00:29:43.429 }, 00:29:43.429 { 00:29:43.429 "name": "BaseBdev2", 00:29:43.429 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:43.429 "is_configured": true, 00:29:43.429 "data_offset": 256, 00:29:43.429 "data_size": 7936 00:29:43.429 } 00:29:43.429 ] 00:29:43.429 }' 00:29:43.429 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.429 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:43.429 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.689 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:43.689 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:43.689 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:43.948 [2024-07-26 10:39:56.800435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:43.948 [2024-07-26 10:39:56.800479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.948 [2024-07-26 10:39:56.800498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2299ac0 00:29:43.948 [2024-07-26 10:39:56.800509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.948 [2024-07-26 10:39:56.800677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.948 [2024-07-26 10:39:56.800691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:43.948 [2024-07-26 10:39:56.800731] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:43.948 [2024-07-26 10:39:56.800741] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:43.948 [2024-07-26 10:39:56.800751] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:43.948 BaseBdev1 00:29:43.949 10:39:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.330 10:39:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.330 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:45.330 "name": "raid_bdev1", 00:29:45.330 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:45.330 "strip_size_kb": 0, 00:29:45.330 "state": "online", 00:29:45.330 "raid_level": "raid1", 00:29:45.330 "superblock": true, 00:29:45.330 "num_base_bdevs": 2, 00:29:45.330 "num_base_bdevs_discovered": 1, 00:29:45.330 "num_base_bdevs_operational": 1, 00:29:45.330 "base_bdevs_list": [ 00:29:45.330 { 00:29:45.330 "name": null, 00:29:45.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.330 "is_configured": false, 00:29:45.330 "data_offset": 256, 00:29:45.330 "data_size": 7936 00:29:45.330 }, 00:29:45.330 { 00:29:45.330 "name": "BaseBdev2", 00:29:45.330 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:45.330 "is_configured": true, 00:29:45.330 "data_offset": 256, 00:29:45.330 "data_size": 7936 00:29:45.330 } 00:29:45.330 ] 00:29:45.330 }' 00:29:45.330 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:45.330 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:45.898 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:45.898 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.898 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:45.898 10:39:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:45.898 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.898 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.898 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.157 "name": "raid_bdev1", 00:29:46.157 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:46.157 "strip_size_kb": 0, 00:29:46.157 "state": "online", 00:29:46.157 "raid_level": "raid1", 00:29:46.157 "superblock": true, 00:29:46.157 "num_base_bdevs": 2, 00:29:46.157 "num_base_bdevs_discovered": 1, 00:29:46.157 "num_base_bdevs_operational": 1, 00:29:46.157 "base_bdevs_list": [ 00:29:46.157 { 00:29:46.157 "name": null, 00:29:46.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.157 "is_configured": false, 00:29:46.157 "data_offset": 256, 00:29:46.157 "data_size": 7936 00:29:46.157 }, 00:29:46.157 { 00:29:46.157 "name": "BaseBdev2", 00:29:46.157 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:46.157 "is_configured": true, 00:29:46.157 "data_offset": 256, 00:29:46.157 "data_size": 7936 00:29:46.157 } 00:29:46.157 ] 00:29:46.157 }' 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
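The common/autotest_common.sh lines traced here belong to the expected-failure wrapper around the bdev_raid_add_base_bdev call whose -22 JSON-RPC error follows below: re-adding BaseBdev1 is refused because its raid superblock is stale (seq_number 1 versus 5) and its uuid is no longer part of raid_bdev1. A simplified equivalent of that pattern, a sketch rather than SPDK's actual NOT/valid_exec_arg helpers (the expect_failure name is made up here), looks like this:

# Sketch of the expected-failure idiom; not SPDK's exact helper implementation.
expect_failure() {
    if "$@"; then
        echo "expected failure, but command succeeded: $*" >&2
        return 1
    fi
    return 0    # the command failed as required, so the test step passes
}

# Re-adding the stale base bdev must be rejected with "Invalid argument" (-22).
expect_failure /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1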
00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:46.157 10:39:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:46.415 [2024-07-26 10:39:59.146607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:46.415 [2024-07-26 10:39:59.146715] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:46.415 [2024-07-26 10:39:59.146728] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:46.415 request: 00:29:46.415 { 00:29:46.415 "base_bdev": "BaseBdev1", 00:29:46.415 "raid_bdev": "raid_bdev1", 00:29:46.415 "method": "bdev_raid_add_base_bdev", 00:29:46.415 "req_id": 1 00:29:46.415 } 00:29:46.415 Got JSON-RPC error response 00:29:46.415 response: 00:29:46.415 { 00:29:46.415 "code": -22, 00:29:46.415 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:46.415 } 00:29:46.415 10:39:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:29:46.415 10:39:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.415 10:39:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.415 10:39:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.415 10:39:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.351 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:29:47.610 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:47.610 "name": "raid_bdev1", 00:29:47.610 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:47.610 "strip_size_kb": 0, 00:29:47.610 "state": "online", 00:29:47.610 "raid_level": "raid1", 00:29:47.610 "superblock": true, 00:29:47.610 "num_base_bdevs": 2, 00:29:47.610 "num_base_bdevs_discovered": 1, 00:29:47.610 "num_base_bdevs_operational": 1, 00:29:47.610 "base_bdevs_list": [ 00:29:47.610 { 00:29:47.610 "name": null, 00:29:47.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.610 "is_configured": false, 00:29:47.610 "data_offset": 256, 00:29:47.610 "data_size": 7936 00:29:47.610 }, 00:29:47.610 { 00:29:47.610 "name": "BaseBdev2", 00:29:47.610 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:47.610 "is_configured": true, 00:29:47.610 "data_offset": 256, 00:29:47.610 "data_size": 7936 00:29:47.610 } 00:29:47.610 ] 00:29:47.610 }' 00:29:47.610 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:47.610 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.178 10:40:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.437 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.437 "name": "raid_bdev1", 00:29:48.437 "uuid": "a6b356c2-2f85-4a02-9848-035256b77885", 00:29:48.437 "strip_size_kb": 0, 00:29:48.437 "state": "online", 00:29:48.437 "raid_level": "raid1", 00:29:48.437 "superblock": true, 00:29:48.437 "num_base_bdevs": 2, 00:29:48.437 "num_base_bdevs_discovered": 1, 00:29:48.437 "num_base_bdevs_operational": 1, 00:29:48.437 "base_bdevs_list": [ 00:29:48.437 { 00:29:48.437 "name": null, 00:29:48.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.437 "is_configured": false, 00:29:48.437 "data_offset": 256, 00:29:48.437 "data_size": 7936 00:29:48.437 }, 00:29:48.437 { 00:29:48.437 "name": "BaseBdev2", 00:29:48.437 "uuid": "32284fec-949d-53c7-872b-c6e9106763c6", 00:29:48.437 "is_configured": true, 00:29:48.437 "data_offset": 256, 00:29:48.437 "data_size": 7936 00:29:48.437 } 00:29:48.437 ] 00:29:48.437 }' 00:29:48.437 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.438 10:40:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 3526416 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 3526416 ']' 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 3526416 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.438 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3526416 00:29:48.697 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:48.697 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:48.697 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3526416' 00:29:48.697 killing process with pid 3526416 00:29:48.697 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 3526416 00:29:48.697 Received shutdown signal, test time was about 60.000000 seconds 00:29:48.697 00:29:48.697 Latency(us) 00:29:48.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.698 =================================================================================================================== 00:29:48.698 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:48.698 [2024-07-26 10:40:01.355273] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:48.698 [2024-07-26 10:40:01.355354] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:48.698 [2024-07-26 10:40:01.355396] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:48.698 [2024-07-26 10:40:01.355408] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21e2690 name raid_bdev1, state offline 00:29:48.698 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 3526416 00:29:48.698 [2024-07-26 10:40:01.383054] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:48.698 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:29:48.698 00:29:48.698 real 0m29.835s 00:29:48.698 user 0m46.127s 00:29:48.698 sys 0m4.811s 00:29:48.698 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.698 10:40:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:29:48.698 ************************************ 00:29:48.698 END TEST raid_rebuild_test_sb_md_separate 00:29:48.698 ************************************ 00:29:48.957 10:40:01 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:29:48.957 10:40:01 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:29:48.957 10:40:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:48.957 10:40:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.957 10:40:01 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:48.957 ************************************ 00:29:48.957 START TEST raid_state_function_test_sb_md_interleaved 00:29:48.957 ************************************ 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=3531884 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3531884' 00:29:48.957 Process raid pid: 3531884 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 3531884 /var/tmp/spdk-raid.sock 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 3531884 ']' 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:48.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.957 10:40:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:48.957 [2024-07-26 10:40:01.697012] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:29:48.957 [2024-07-26 10:40:01.697066] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:48.957 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:48.957 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:48.957 [2024-07-26 10:40:01.832135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.218 [2024-07-26 10:40:01.876374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.218 [2024-07-26 10:40:01.930284] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:49.218 [2024-07-26 10:40:01.930311] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:49.789 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:49.789 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:29:49.789 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n 
Existed_Raid 00:29:50.092 [2024-07-26 10:40:02.809615] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:50.092 [2024-07-26 10:40:02.809655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:50.092 [2024-07-26 10:40:02.809665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:50.092 [2024-07-26 10:40:02.809676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.092 10:40:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:50.350 10:40:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.350 "name": "Existed_Raid", 00:29:50.350 "uuid": "6c460446-4065-41e8-9da4-aa651101a974", 00:29:50.350 "strip_size_kb": 0, 00:29:50.350 "state": "configuring", 00:29:50.350 "raid_level": "raid1", 00:29:50.350 "superblock": true, 00:29:50.350 "num_base_bdevs": 2, 00:29:50.350 "num_base_bdevs_discovered": 0, 00:29:50.350 "num_base_bdevs_operational": 2, 00:29:50.350 "base_bdevs_list": [ 00:29:50.350 { 00:29:50.350 "name": "BaseBdev1", 00:29:50.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.350 "is_configured": false, 00:29:50.350 "data_offset": 0, 00:29:50.350 "data_size": 0 00:29:50.350 }, 00:29:50.350 { 00:29:50.350 "name": "BaseBdev2", 00:29:50.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.350 "is_configured": false, 00:29:50.350 "data_offset": 0, 00:29:50.350 "data_size": 0 00:29:50.350 } 00:29:50.350 ] 00:29:50.350 }' 00:29:50.350 10:40:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.350 10:40:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:50.917 10:40:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:51.175 [2024-07-26 10:40:03.840214] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:51.175 [2024-07-26 10:40:03.840243] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22e1ce0 name Existed_Raid, state configuring 00:29:51.175 10:40:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:29:51.175 [2024-07-26 10:40:04.068830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:51.175 [2024-07-26 10:40:04.068857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:51.175 [2024-07-26 10:40:04.068866] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:51.175 [2024-07-26 10:40:04.068877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:29:51.433 [2024-07-26 10:40:04.306926] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:51.433 BaseBdev1 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:51.433 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:51.691 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:51.950 [ 00:29:51.950 { 00:29:51.950 "name": "BaseBdev1", 00:29:51.950 "aliases": [ 00:29:51.950 "2b4eae34-8674-4c5a-a812-fa98fcae022f" 00:29:51.950 ], 00:29:51.950 "product_name": "Malloc disk", 00:29:51.950 "block_size": 4128, 00:29:51.950 "num_blocks": 8192, 00:29:51.950 "uuid": "2b4eae34-8674-4c5a-a812-fa98fcae022f", 00:29:51.950 "md_size": 32, 00:29:51.950 "md_interleave": true, 00:29:51.950 "dif_type": 0, 00:29:51.950 "assigned_rate_limits": { 00:29:51.950 "rw_ios_per_sec": 0, 00:29:51.950 "rw_mbytes_per_sec": 0, 00:29:51.950 "r_mbytes_per_sec": 0, 00:29:51.950 "w_mbytes_per_sec": 0 00:29:51.950 }, 00:29:51.950 "claimed": true, 00:29:51.950 "claim_type": "exclusive_write", 00:29:51.950 "zoned": 
false, 00:29:51.950 "supported_io_types": { 00:29:51.950 "read": true, 00:29:51.950 "write": true, 00:29:51.950 "unmap": true, 00:29:51.950 "flush": true, 00:29:51.950 "reset": true, 00:29:51.950 "nvme_admin": false, 00:29:51.950 "nvme_io": false, 00:29:51.950 "nvme_io_md": false, 00:29:51.950 "write_zeroes": true, 00:29:51.950 "zcopy": true, 00:29:51.950 "get_zone_info": false, 00:29:51.950 "zone_management": false, 00:29:51.950 "zone_append": false, 00:29:51.950 "compare": false, 00:29:51.950 "compare_and_write": false, 00:29:51.950 "abort": true, 00:29:51.950 "seek_hole": false, 00:29:51.950 "seek_data": false, 00:29:51.950 "copy": true, 00:29:51.950 "nvme_iov_md": false 00:29:51.950 }, 00:29:51.950 "memory_domains": [ 00:29:51.950 { 00:29:51.950 "dma_device_id": "system", 00:29:51.950 "dma_device_type": 1 00:29:51.950 }, 00:29:51.950 { 00:29:51.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:51.950 "dma_device_type": 2 00:29:51.950 } 00:29:51.950 ], 00:29:51.950 "driver_specific": {} 00:29:51.950 } 00:29:51.950 ] 00:29:51.950 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:29:51.950 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:51.950 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.951 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:52.210 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:52.210 "name": "Existed_Raid", 00:29:52.210 "uuid": "669a34d9-4c29-470b-83b5-8195993a9a16", 00:29:52.210 "strip_size_kb": 0, 00:29:52.210 "state": "configuring", 00:29:52.210 "raid_level": "raid1", 00:29:52.210 "superblock": true, 00:29:52.210 "num_base_bdevs": 2, 00:29:52.210 "num_base_bdevs_discovered": 1, 00:29:52.210 "num_base_bdevs_operational": 2, 00:29:52.210 "base_bdevs_list": [ 00:29:52.210 { 00:29:52.210 "name": "BaseBdev1", 00:29:52.210 "uuid": "2b4eae34-8674-4c5a-a812-fa98fcae022f", 00:29:52.210 "is_configured": true, 00:29:52.210 
"data_offset": 256, 00:29:52.210 "data_size": 7936 00:29:52.210 }, 00:29:52.210 { 00:29:52.210 "name": "BaseBdev2", 00:29:52.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.210 "is_configured": false, 00:29:52.210 "data_offset": 0, 00:29:52.210 "data_size": 0 00:29:52.210 } 00:29:52.210 ] 00:29:52.210 }' 00:29:52.210 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:52.210 10:40:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:52.777 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:29:53.036 [2024-07-26 10:40:05.722667] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:53.036 [2024-07-26 10:40:05.722705] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22e1610 name Existed_Raid, state configuring 00:29:53.036 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:29:53.295 [2024-07-26 10:40:05.951310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:53.295 [2024-07-26 10:40:05.952603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:53.295 [2024-07-26 10:40:05.952641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.295 10:40:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.295 10:40:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:53.554 10:40:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.554 "name": "Existed_Raid", 00:29:53.554 "uuid": "0550035e-52d5-454c-a8b5-17a6a3c7a9f2", 00:29:53.554 "strip_size_kb": 0, 00:29:53.554 "state": "configuring", 00:29:53.554 "raid_level": "raid1", 00:29:53.554 "superblock": true, 00:29:53.554 "num_base_bdevs": 2, 00:29:53.554 "num_base_bdevs_discovered": 1, 00:29:53.554 "num_base_bdevs_operational": 2, 00:29:53.554 "base_bdevs_list": [ 00:29:53.554 { 00:29:53.554 "name": "BaseBdev1", 00:29:53.554 "uuid": "2b4eae34-8674-4c5a-a812-fa98fcae022f", 00:29:53.554 "is_configured": true, 00:29:53.554 "data_offset": 256, 00:29:53.554 "data_size": 7936 00:29:53.554 }, 00:29:53.554 { 00:29:53.554 "name": "BaseBdev2", 00:29:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.554 "is_configured": false, 00:29:53.554 "data_offset": 0, 00:29:53.554 "data_size": 0 00:29:53.554 } 00:29:53.554 ] 00:29:53.554 }' 00:29:53.554 10:40:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.554 10:40:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:54.122 10:40:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:29:54.122 [2024-07-26 10:40:06.993304] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:54.122 [2024-07-26 10:40:06.993425] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x247d7f0 00:29:54.122 [2024-07-26 10:40:06.993437] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:29:54.122 [2024-07-26 10:40:06.993490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x247ebd0 00:29:54.122 [2024-07-26 10:40:06.993554] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x247d7f0 00:29:54.122 [2024-07-26 10:40:06.993563] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x247d7f0 00:29:54.122 [2024-07-26 10:40:06.993613] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.122 BaseBdev2 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:54.123 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:29:54.382 10:40:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:54.641 [ 00:29:54.641 { 00:29:54.641 "name": "BaseBdev2", 00:29:54.641 "aliases": [ 00:29:54.641 "e41b622f-fa5e-436c-b158-901b44a66b99" 00:29:54.641 ], 00:29:54.641 "product_name": "Malloc disk", 00:29:54.641 "block_size": 4128, 00:29:54.641 "num_blocks": 8192, 00:29:54.641 "uuid": "e41b622f-fa5e-436c-b158-901b44a66b99", 00:29:54.641 "md_size": 32, 00:29:54.641 "md_interleave": true, 00:29:54.641 "dif_type": 0, 00:29:54.641 "assigned_rate_limits": { 00:29:54.641 "rw_ios_per_sec": 0, 00:29:54.641 "rw_mbytes_per_sec": 0, 00:29:54.641 "r_mbytes_per_sec": 0, 00:29:54.641 "w_mbytes_per_sec": 0 00:29:54.641 }, 00:29:54.641 "claimed": true, 00:29:54.641 "claim_type": "exclusive_write", 00:29:54.641 "zoned": false, 00:29:54.641 "supported_io_types": { 00:29:54.641 "read": true, 00:29:54.641 "write": true, 00:29:54.641 "unmap": true, 00:29:54.641 "flush": true, 00:29:54.641 "reset": true, 00:29:54.641 "nvme_admin": false, 00:29:54.641 "nvme_io": false, 00:29:54.641 "nvme_io_md": false, 00:29:54.641 "write_zeroes": true, 00:29:54.641 "zcopy": true, 00:29:54.641 "get_zone_info": false, 00:29:54.641 "zone_management": false, 00:29:54.641 "zone_append": false, 00:29:54.641 "compare": false, 00:29:54.641 "compare_and_write": false, 00:29:54.641 "abort": true, 00:29:54.641 "seek_hole": false, 00:29:54.641 "seek_data": false, 00:29:54.641 "copy": true, 00:29:54.641 "nvme_iov_md": false 00:29:54.641 }, 00:29:54.641 "memory_domains": [ 00:29:54.641 { 00:29:54.641 "dma_device_id": "system", 00:29:54.641 "dma_device_type": 1 00:29:54.641 }, 00:29:54.641 { 00:29:54.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:54.641 "dma_device_type": 2 00:29:54.641 } 00:29:54.641 ], 00:29:54.641 "driver_specific": {} 00:29:54.641 } 00:29:54.641 ] 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
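The raid_state_function_test_sb_md_interleaved steps above drive the same RPC surface with interleaved-metadata malloc bdevs: bdev_malloc_create 32 4096 -m 32 -i yields the block_size 4128 (4096 bytes of data plus 32 bytes of interleaved metadata) and num_blocks 8192 reported in the dumps. The lines below are a condensed sketch of the create-and-verify flow using only the sizes and flags visible in this trace; the RPC/SOCK shorthand is not part of the script, and the real test also deletes and recreates Existed_Raid between steps.

# Condensed sketch of the md-interleaved state-function flow (not the script itself).
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# A superblock raid1 bdev created before its base bdevs exist stays "configuring".
"$RPC" -s "$SOCK" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Malloc base bdevs: 32 MiB of data, 4096-byte blocks, 32 bytes of
# interleaved metadata per block (-m 32 -i).
"$RPC" -s "$SOCK" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
"$RPC" -s "$SOCK" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2

# Once both base bdevs are claimed, the raid bdev must report "online" with
# num_base_bdevs_discovered == num_base_bdevs_operational == 2.
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'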
00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:54.641 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.642 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:54.901 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:54.901 "name": "Existed_Raid", 00:29:54.901 "uuid": "0550035e-52d5-454c-a8b5-17a6a3c7a9f2", 00:29:54.901 "strip_size_kb": 0, 00:29:54.901 "state": "online", 00:29:54.901 "raid_level": "raid1", 00:29:54.901 "superblock": true, 00:29:54.901 "num_base_bdevs": 2, 00:29:54.901 "num_base_bdevs_discovered": 2, 00:29:54.901 "num_base_bdevs_operational": 2, 00:29:54.901 "base_bdevs_list": [ 00:29:54.901 { 00:29:54.901 "name": "BaseBdev1", 00:29:54.901 "uuid": "2b4eae34-8674-4c5a-a812-fa98fcae022f", 00:29:54.901 "is_configured": true, 00:29:54.901 "data_offset": 256, 00:29:54.901 "data_size": 7936 00:29:54.901 }, 00:29:54.901 { 00:29:54.901 "name": "BaseBdev2", 00:29:54.901 "uuid": "e41b622f-fa5e-436c-b158-901b44a66b99", 00:29:54.901 "is_configured": true, 00:29:54.901 "data_offset": 256, 00:29:54.901 "data_size": 7936 00:29:54.901 } 00:29:54.901 ] 00:29:54.901 }' 00:29:54.901 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:54.901 10:40:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:29:55.467 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:55.725 [2024-07-26 10:40:08.477500] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:55.725 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:55.725 "name": "Existed_Raid", 00:29:55.725 "aliases": [ 00:29:55.725 "0550035e-52d5-454c-a8b5-17a6a3c7a9f2" 00:29:55.725 ], 00:29:55.725 "product_name": "Raid Volume", 00:29:55.725 "block_size": 4128, 00:29:55.725 "num_blocks": 7936, 00:29:55.725 "uuid": "0550035e-52d5-454c-a8b5-17a6a3c7a9f2", 00:29:55.725 "md_size": 32, 00:29:55.725 "md_interleave": true, 00:29:55.725 "dif_type": 0, 00:29:55.725 "assigned_rate_limits": { 00:29:55.725 "rw_ios_per_sec": 0, 
00:29:55.725 "rw_mbytes_per_sec": 0, 00:29:55.725 "r_mbytes_per_sec": 0, 00:29:55.725 "w_mbytes_per_sec": 0 00:29:55.725 }, 00:29:55.725 "claimed": false, 00:29:55.725 "zoned": false, 00:29:55.725 "supported_io_types": { 00:29:55.725 "read": true, 00:29:55.725 "write": true, 00:29:55.725 "unmap": false, 00:29:55.725 "flush": false, 00:29:55.725 "reset": true, 00:29:55.725 "nvme_admin": false, 00:29:55.725 "nvme_io": false, 00:29:55.725 "nvme_io_md": false, 00:29:55.725 "write_zeroes": true, 00:29:55.725 "zcopy": false, 00:29:55.725 "get_zone_info": false, 00:29:55.725 "zone_management": false, 00:29:55.725 "zone_append": false, 00:29:55.725 "compare": false, 00:29:55.725 "compare_and_write": false, 00:29:55.725 "abort": false, 00:29:55.725 "seek_hole": false, 00:29:55.725 "seek_data": false, 00:29:55.725 "copy": false, 00:29:55.725 "nvme_iov_md": false 00:29:55.725 }, 00:29:55.725 "memory_domains": [ 00:29:55.725 { 00:29:55.725 "dma_device_id": "system", 00:29:55.725 "dma_device_type": 1 00:29:55.725 }, 00:29:55.725 { 00:29:55.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:55.725 "dma_device_type": 2 00:29:55.725 }, 00:29:55.725 { 00:29:55.725 "dma_device_id": "system", 00:29:55.725 "dma_device_type": 1 00:29:55.725 }, 00:29:55.725 { 00:29:55.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:55.725 "dma_device_type": 2 00:29:55.725 } 00:29:55.725 ], 00:29:55.725 "driver_specific": { 00:29:55.725 "raid": { 00:29:55.725 "uuid": "0550035e-52d5-454c-a8b5-17a6a3c7a9f2", 00:29:55.725 "strip_size_kb": 0, 00:29:55.725 "state": "online", 00:29:55.725 "raid_level": "raid1", 00:29:55.725 "superblock": true, 00:29:55.725 "num_base_bdevs": 2, 00:29:55.725 "num_base_bdevs_discovered": 2, 00:29:55.725 "num_base_bdevs_operational": 2, 00:29:55.725 "base_bdevs_list": [ 00:29:55.725 { 00:29:55.725 "name": "BaseBdev1", 00:29:55.725 "uuid": "2b4eae34-8674-4c5a-a812-fa98fcae022f", 00:29:55.725 "is_configured": true, 00:29:55.725 "data_offset": 256, 00:29:55.725 "data_size": 7936 00:29:55.725 }, 00:29:55.725 { 00:29:55.725 "name": "BaseBdev2", 00:29:55.725 "uuid": "e41b622f-fa5e-436c-b158-901b44a66b99", 00:29:55.725 "is_configured": true, 00:29:55.725 "data_offset": 256, 00:29:55.725 "data_size": 7936 00:29:55.725 } 00:29:55.725 ] 00:29:55.725 } 00:29:55.725 } 00:29:55.725 }' 00:29:55.725 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:55.725 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:29:55.725 BaseBdev2' 00:29:55.725 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:55.725 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:29:55.725 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:55.984 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:55.984 "name": "BaseBdev1", 00:29:55.984 "aliases": [ 00:29:55.984 "2b4eae34-8674-4c5a-a812-fa98fcae022f" 00:29:55.984 ], 00:29:55.984 "product_name": "Malloc disk", 00:29:55.984 "block_size": 4128, 00:29:55.984 "num_blocks": 8192, 00:29:55.984 "uuid": "2b4eae34-8674-4c5a-a812-fa98fcae022f", 00:29:55.984 
"md_size": 32, 00:29:55.984 "md_interleave": true, 00:29:55.984 "dif_type": 0, 00:29:55.984 "assigned_rate_limits": { 00:29:55.984 "rw_ios_per_sec": 0, 00:29:55.984 "rw_mbytes_per_sec": 0, 00:29:55.984 "r_mbytes_per_sec": 0, 00:29:55.984 "w_mbytes_per_sec": 0 00:29:55.984 }, 00:29:55.984 "claimed": true, 00:29:55.984 "claim_type": "exclusive_write", 00:29:55.984 "zoned": false, 00:29:55.984 "supported_io_types": { 00:29:55.984 "read": true, 00:29:55.984 "write": true, 00:29:55.984 "unmap": true, 00:29:55.984 "flush": true, 00:29:55.984 "reset": true, 00:29:55.984 "nvme_admin": false, 00:29:55.984 "nvme_io": false, 00:29:55.984 "nvme_io_md": false, 00:29:55.984 "write_zeroes": true, 00:29:55.984 "zcopy": true, 00:29:55.984 "get_zone_info": false, 00:29:55.984 "zone_management": false, 00:29:55.984 "zone_append": false, 00:29:55.984 "compare": false, 00:29:55.984 "compare_and_write": false, 00:29:55.984 "abort": true, 00:29:55.984 "seek_hole": false, 00:29:55.984 "seek_data": false, 00:29:55.984 "copy": true, 00:29:55.984 "nvme_iov_md": false 00:29:55.984 }, 00:29:55.984 "memory_domains": [ 00:29:55.984 { 00:29:55.984 "dma_device_id": "system", 00:29:55.984 "dma_device_type": 1 00:29:55.984 }, 00:29:55.984 { 00:29:55.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:55.984 "dma_device_type": 2 00:29:55.984 } 00:29:55.984 ], 00:29:55.984 "driver_specific": {} 00:29:55.984 }' 00:29:55.984 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:55.984 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:55.984 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:55.984 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:56.242 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:56.242 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:56.242 10:40:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:56.242 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:56.242 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:56.242 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:56.242 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:56.242 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:56.243 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:56.243 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:29:56.243 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:56.500 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:56.500 "name": "BaseBdev2", 00:29:56.500 "aliases": [ 00:29:56.500 "e41b622f-fa5e-436c-b158-901b44a66b99" 
00:29:56.500 ], 00:29:56.500 "product_name": "Malloc disk", 00:29:56.500 "block_size": 4128, 00:29:56.500 "num_blocks": 8192, 00:29:56.500 "uuid": "e41b622f-fa5e-436c-b158-901b44a66b99", 00:29:56.500 "md_size": 32, 00:29:56.500 "md_interleave": true, 00:29:56.500 "dif_type": 0, 00:29:56.500 "assigned_rate_limits": { 00:29:56.500 "rw_ios_per_sec": 0, 00:29:56.500 "rw_mbytes_per_sec": 0, 00:29:56.500 "r_mbytes_per_sec": 0, 00:29:56.500 "w_mbytes_per_sec": 0 00:29:56.500 }, 00:29:56.500 "claimed": true, 00:29:56.500 "claim_type": "exclusive_write", 00:29:56.500 "zoned": false, 00:29:56.500 "supported_io_types": { 00:29:56.500 "read": true, 00:29:56.500 "write": true, 00:29:56.500 "unmap": true, 00:29:56.500 "flush": true, 00:29:56.500 "reset": true, 00:29:56.500 "nvme_admin": false, 00:29:56.500 "nvme_io": false, 00:29:56.500 "nvme_io_md": false, 00:29:56.500 "write_zeroes": true, 00:29:56.500 "zcopy": true, 00:29:56.500 "get_zone_info": false, 00:29:56.501 "zone_management": false, 00:29:56.501 "zone_append": false, 00:29:56.501 "compare": false, 00:29:56.501 "compare_and_write": false, 00:29:56.501 "abort": true, 00:29:56.501 "seek_hole": false, 00:29:56.501 "seek_data": false, 00:29:56.501 "copy": true, 00:29:56.501 "nvme_iov_md": false 00:29:56.501 }, 00:29:56.501 "memory_domains": [ 00:29:56.501 { 00:29:56.501 "dma_device_id": "system", 00:29:56.501 "dma_device_type": 1 00:29:56.501 }, 00:29:56.501 { 00:29:56.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:56.501 "dma_device_type": 2 00:29:56.501 } 00:29:56.501 ], 00:29:56.501 "driver_specific": {} 00:29:56.501 }' 00:29:56.501 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:56.501 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:56.758 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:29:56.758 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:56.758 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:56.758 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:29:56.759 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:56.759 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:56.759 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:29:56.759 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:56.759 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:29:57.017 [2024-07-26 10:40:09.868988] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:29:57.017 10:40:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.017 10:40:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:57.274 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:57.274 "name": "Existed_Raid", 00:29:57.274 "uuid": "0550035e-52d5-454c-a8b5-17a6a3c7a9f2", 00:29:57.274 "strip_size_kb": 0, 00:29:57.274 "state": "online", 00:29:57.274 "raid_level": "raid1", 00:29:57.274 "superblock": true, 00:29:57.274 "num_base_bdevs": 2, 00:29:57.274 "num_base_bdevs_discovered": 1, 00:29:57.274 "num_base_bdevs_operational": 1, 00:29:57.274 "base_bdevs_list": [ 00:29:57.274 { 00:29:57.274 "name": null, 00:29:57.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.274 "is_configured": false, 00:29:57.274 "data_offset": 256, 00:29:57.274 "data_size": 7936 00:29:57.274 }, 00:29:57.274 { 00:29:57.274 "name": "BaseBdev2", 00:29:57.274 "uuid": "e41b622f-fa5e-436c-b158-901b44a66b99", 00:29:57.274 "is_configured": true, 00:29:57.274 "data_offset": 256, 00:29:57.275 "data_size": 7936 00:29:57.275 } 00:29:57.275 ] 00:29:57.275 }' 00:29:57.275 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:57.275 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:57.841 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:29:57.841 10:40:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:29:57.841 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.841 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:29:58.099 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:29:58.099 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:58.099 10:40:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:29:58.358 [2024-07-26 10:40:11.109363] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:58.358 [2024-07-26 10:40:11.109440] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:58.358 [2024-07-26 10:40:11.120222] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:58.358 [2024-07-26 10:40:11.120252] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:58.358 [2024-07-26 10:40:11.120262] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x247d7f0 name Existed_Raid, state offline 00:29:58.358 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:29:58.358 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:29:58.358 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.358 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 3531884 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 3531884 ']' 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 3531884 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3531884 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:58.617 10:40:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3531884' 00:29:58.617 killing process with pid 3531884 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 3531884 00:29:58.617 [2024-07-26 10:40:11.430390] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:58.617 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 3531884 00:29:58.617 [2024-07-26 10:40:11.431218] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:58.876 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:29:58.876 00:29:58.876 real 0m9.972s 00:29:58.876 user 0m17.706s 00:29:58.876 sys 0m1.925s 00:29:58.876 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.876 10:40:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:58.876 ************************************ 00:29:58.876 END TEST raid_state_function_test_sb_md_interleaved 00:29:58.876 ************************************ 00:29:58.876 10:40:11 bdev_raid -- bdev/bdev_raid.sh@993 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:29:58.876 10:40:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:58.876 10:40:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.876 10:40:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:58.876 ************************************ 00:29:58.876 START TEST raid_superblock_test_md_interleaved 00:29:58.876 ************************************ 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:29:58.876 10:40:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=3533703 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 3533703 /var/tmp/spdk-raid.sock 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 3533703 ']' 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:58.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:58.876 10:40:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:29:58.876 [2024-07-26 10:40:11.745623] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
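Before any bdevs are created, the superblock test starts its own bdev_svc application on a dedicated RPC socket with bdev_raid debug logging enabled, and only proceeds once that socket is listening. A hedged sketch of that startup, using the paths shown in the trace and the waitforlisten helper these scripts rely on:

# start the bdev_svc app used by the raid tests on its own RPC socket
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# block until the process is up and the UNIX-domain RPC socket accepts connections
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock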
00:29:58.876 [2024-07-26 10:40:11.745677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533703 ] 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:59.135 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:59.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:59.135 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:59.135 [2024-07-26 10:40:11.878014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.135 [2024-07-26 10:40:11.922016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.135 [2024-07-26 10:40:11.986280] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:59.135 [2024-07-26 10:40:11.986321] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:30:00.070 malloc1 00:30:00.070 10:40:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:00.329 [2024-07-26 10:40:13.065320] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:00.329 [2024-07-26 10:40:13.065363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.329 [2024-07-26 10:40:13.065383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20754a0 00:30:00.329 [2024-07-26 10:40:13.065394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.329 [2024-07-26 10:40:13.066660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.329 [2024-07-26 10:40:13.066685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:00.329 pt1 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:00.329 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:30:00.588 malloc2 00:30:00.588 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:00.845 [2024-07-26 10:40:13.527369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:00.845 [2024-07-26 10:40:13.527411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.845 [2024-07-26 10:40:13.527427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x202fd10 00:30:00.845 [2024-07-26 10:40:13.527439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.845 [2024-07-26 10:40:13.528747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.845 [2024-07-26 10:40:13.528772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:00.845 pt2 00:30:00.845 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:00.845 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:00.845 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:30:01.103 [2024-07-26 10:40:13.751978] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
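The trace above assembles the array under test: two malloc disks with interleaved metadata, each wrapped in a passthru bdev with a fixed UUID, joined into a raid1 volume that carries an on-disk superblock. Condensed to the bare RPC calls from the trace (same $RPC shorthand as in the earlier sketch):

$RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# -s asks bdev_raid to write a superblock onto the base bdevs
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s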
00:30:01.103 [2024-07-26 10:40:13.753473] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:01.103 [2024-07-26 10:40:13.753615] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x201bb40 00:30:01.103 [2024-07-26 10:40:13.753627] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:01.103 [2024-07-26 10:40:13.753702] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x201ec00 00:30:01.103 [2024-07-26 10:40:13.753773] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x201bb40 00:30:01.103 [2024-07-26 10:40:13.753782] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x201bb40 00:30:01.103 [2024-07-26 10:40:13.753847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.103 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.103 "name": "raid_bdev1", 00:30:01.103 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:01.103 "strip_size_kb": 0, 00:30:01.103 "state": "online", 00:30:01.103 "raid_level": "raid1", 00:30:01.103 "superblock": true, 00:30:01.103 "num_base_bdevs": 2, 00:30:01.103 "num_base_bdevs_discovered": 2, 00:30:01.103 "num_base_bdevs_operational": 2, 00:30:01.103 "base_bdevs_list": [ 00:30:01.103 { 00:30:01.103 "name": "pt1", 00:30:01.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:01.103 "is_configured": true, 00:30:01.104 "data_offset": 256, 00:30:01.104 "data_size": 7936 00:30:01.104 }, 00:30:01.104 { 00:30:01.104 "name": "pt2", 00:30:01.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:01.104 "is_configured": true, 00:30:01.104 "data_offset": 256, 00:30:01.104 "data_size": 7936 00:30:01.104 } 00:30:01.104 ] 00:30:01.104 }' 00:30:01.104 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:30:01.104 10:40:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:01.669 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:01.927 [2024-07-26 10:40:14.770862] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:01.927 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:01.927 "name": "raid_bdev1", 00:30:01.927 "aliases": [ 00:30:01.927 "76148d02-28f9-43bf-84c9-c02623991eef" 00:30:01.927 ], 00:30:01.927 "product_name": "Raid Volume", 00:30:01.927 "block_size": 4128, 00:30:01.927 "num_blocks": 7936, 00:30:01.927 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:01.927 "md_size": 32, 00:30:01.927 "md_interleave": true, 00:30:01.927 "dif_type": 0, 00:30:01.927 "assigned_rate_limits": { 00:30:01.927 "rw_ios_per_sec": 0, 00:30:01.927 "rw_mbytes_per_sec": 0, 00:30:01.927 "r_mbytes_per_sec": 0, 00:30:01.927 "w_mbytes_per_sec": 0 00:30:01.927 }, 00:30:01.927 "claimed": false, 00:30:01.927 "zoned": false, 00:30:01.927 "supported_io_types": { 00:30:01.927 "read": true, 00:30:01.927 "write": true, 00:30:01.927 "unmap": false, 00:30:01.927 "flush": false, 00:30:01.927 "reset": true, 00:30:01.927 "nvme_admin": false, 00:30:01.927 "nvme_io": false, 00:30:01.927 "nvme_io_md": false, 00:30:01.927 "write_zeroes": true, 00:30:01.927 "zcopy": false, 00:30:01.927 "get_zone_info": false, 00:30:01.927 "zone_management": false, 00:30:01.927 "zone_append": false, 00:30:01.927 "compare": false, 00:30:01.927 "compare_and_write": false, 00:30:01.927 "abort": false, 00:30:01.927 "seek_hole": false, 00:30:01.927 "seek_data": false, 00:30:01.927 "copy": false, 00:30:01.927 "nvme_iov_md": false 00:30:01.927 }, 00:30:01.927 "memory_domains": [ 00:30:01.927 { 00:30:01.927 "dma_device_id": "system", 00:30:01.927 "dma_device_type": 1 00:30:01.927 }, 00:30:01.927 { 00:30:01.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:01.927 "dma_device_type": 2 00:30:01.927 }, 00:30:01.927 { 00:30:01.927 "dma_device_id": "system", 00:30:01.927 "dma_device_type": 1 00:30:01.927 }, 00:30:01.927 { 00:30:01.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:01.927 "dma_device_type": 2 00:30:01.927 } 00:30:01.927 ], 00:30:01.927 "driver_specific": { 00:30:01.927 "raid": { 00:30:01.927 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:01.927 "strip_size_kb": 0, 00:30:01.927 "state": "online", 00:30:01.927 "raid_level": "raid1", 00:30:01.927 "superblock": true, 00:30:01.927 "num_base_bdevs": 2, 
00:30:01.927 "num_base_bdevs_discovered": 2, 00:30:01.927 "num_base_bdevs_operational": 2, 00:30:01.927 "base_bdevs_list": [ 00:30:01.927 { 00:30:01.927 "name": "pt1", 00:30:01.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:01.927 "is_configured": true, 00:30:01.927 "data_offset": 256, 00:30:01.927 "data_size": 7936 00:30:01.927 }, 00:30:01.927 { 00:30:01.927 "name": "pt2", 00:30:01.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:01.927 "is_configured": true, 00:30:01.927 "data_offset": 256, 00:30:01.927 "data_size": 7936 00:30:01.927 } 00:30:01.927 ] 00:30:01.927 } 00:30:01.927 } 00:30:01.927 }' 00:30:01.927 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:02.185 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:02.185 pt2' 00:30:02.185 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:02.185 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:02.185 10:40:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:02.185 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:02.185 "name": "pt1", 00:30:02.185 "aliases": [ 00:30:02.185 "00000000-0000-0000-0000-000000000001" 00:30:02.185 ], 00:30:02.185 "product_name": "passthru", 00:30:02.185 "block_size": 4128, 00:30:02.185 "num_blocks": 8192, 00:30:02.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:02.185 "md_size": 32, 00:30:02.185 "md_interleave": true, 00:30:02.185 "dif_type": 0, 00:30:02.185 "assigned_rate_limits": { 00:30:02.185 "rw_ios_per_sec": 0, 00:30:02.185 "rw_mbytes_per_sec": 0, 00:30:02.185 "r_mbytes_per_sec": 0, 00:30:02.185 "w_mbytes_per_sec": 0 00:30:02.185 }, 00:30:02.185 "claimed": true, 00:30:02.185 "claim_type": "exclusive_write", 00:30:02.185 "zoned": false, 00:30:02.185 "supported_io_types": { 00:30:02.185 "read": true, 00:30:02.185 "write": true, 00:30:02.185 "unmap": true, 00:30:02.185 "flush": true, 00:30:02.185 "reset": true, 00:30:02.185 "nvme_admin": false, 00:30:02.185 "nvme_io": false, 00:30:02.185 "nvme_io_md": false, 00:30:02.185 "write_zeroes": true, 00:30:02.185 "zcopy": true, 00:30:02.185 "get_zone_info": false, 00:30:02.185 "zone_management": false, 00:30:02.185 "zone_append": false, 00:30:02.185 "compare": false, 00:30:02.185 "compare_and_write": false, 00:30:02.185 "abort": true, 00:30:02.185 "seek_hole": false, 00:30:02.185 "seek_data": false, 00:30:02.185 "copy": true, 00:30:02.185 "nvme_iov_md": false 00:30:02.185 }, 00:30:02.185 "memory_domains": [ 00:30:02.185 { 00:30:02.185 "dma_device_id": "system", 00:30:02.185 "dma_device_type": 1 00:30:02.185 }, 00:30:02.185 { 00:30:02.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.185 "dma_device_type": 2 00:30:02.185 } 00:30:02.185 ], 00:30:02.185 "driver_specific": { 00:30:02.185 "passthru": { 00:30:02.185 "name": "pt1", 00:30:02.185 "base_bdev_name": "malloc1" 00:30:02.185 } 00:30:02.185 } 00:30:02.185 }' 00:30:02.185 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:30:02.443 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:02.701 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:02.701 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:30:02.701 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:02.701 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:02.701 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:02.959 "name": "pt2", 00:30:02.959 "aliases": [ 00:30:02.959 "00000000-0000-0000-0000-000000000002" 00:30:02.959 ], 00:30:02.959 "product_name": "passthru", 00:30:02.959 "block_size": 4128, 00:30:02.959 "num_blocks": 8192, 00:30:02.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:02.959 "md_size": 32, 00:30:02.959 "md_interleave": true, 00:30:02.959 "dif_type": 0, 00:30:02.959 "assigned_rate_limits": { 00:30:02.959 "rw_ios_per_sec": 0, 00:30:02.959 "rw_mbytes_per_sec": 0, 00:30:02.959 "r_mbytes_per_sec": 0, 00:30:02.959 "w_mbytes_per_sec": 0 00:30:02.959 }, 00:30:02.959 "claimed": true, 00:30:02.959 "claim_type": "exclusive_write", 00:30:02.959 "zoned": false, 00:30:02.959 "supported_io_types": { 00:30:02.959 "read": true, 00:30:02.959 "write": true, 00:30:02.959 "unmap": true, 00:30:02.959 "flush": true, 00:30:02.959 "reset": true, 00:30:02.959 "nvme_admin": false, 00:30:02.959 "nvme_io": false, 00:30:02.959 "nvme_io_md": false, 00:30:02.959 "write_zeroes": true, 00:30:02.959 "zcopy": true, 00:30:02.959 "get_zone_info": false, 00:30:02.959 "zone_management": false, 00:30:02.959 "zone_append": false, 00:30:02.959 "compare": false, 00:30:02.959 "compare_and_write": false, 00:30:02.959 "abort": true, 00:30:02.959 "seek_hole": false, 00:30:02.959 "seek_data": false, 00:30:02.959 "copy": true, 00:30:02.959 "nvme_iov_md": false 00:30:02.959 }, 00:30:02.959 "memory_domains": [ 00:30:02.959 { 00:30:02.959 "dma_device_id": "system", 00:30:02.959 "dma_device_type": 1 00:30:02.959 }, 00:30:02.959 { 00:30:02.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:02.959 "dma_device_type": 2 00:30:02.959 } 00:30:02.959 ], 00:30:02.959 "driver_specific": { 00:30:02.959 "passthru": { 00:30:02.959 "name": "pt2", 00:30:02.959 "base_bdev_name": "malloc2" 00:30:02.959 } 00:30:02.959 } 00:30:02.959 }' 00:30:02.959 10:40:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:02.959 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:03.218 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:30:03.218 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:03.218 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:03.218 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:30:03.218 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:03.218 10:40:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:30:03.477 [2024-07-26 10:40:16.178587] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:03.477 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=76148d02-28f9-43bf-84c9-c02623991eef 00:30:03.477 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 76148d02-28f9-43bf-84c9-c02623991eef ']' 00:30:03.477 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:03.736 [2024-07-26 10:40:16.402929] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:03.736 [2024-07-26 10:40:16.402945] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:03.736 [2024-07-26 10:40:16.402994] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:03.736 [2024-07-26 10:40:16.403044] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:03.736 [2024-07-26 10:40:16.403054] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x201bb40 name raid_bdev1, state offline 00:30:03.736 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.736 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:30:04.031 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:30:04.031 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:30:04.031 10:40:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:04.032 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:04.032 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:04.032 10:40:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:04.322 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:04.322 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.580 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.581 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:04.581 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.581 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:04.581 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:04.839 [2024-07-26 10:40:17.537875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:04.840 [2024-07-26 10:40:17.539102] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:30:04.840 [2024-07-26 10:40:17.539162] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:04.840 [2024-07-26 10:40:17.539199] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:04.840 [2024-07-26 10:40:17.539216] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:04.840 [2024-07-26 10:40:17.539225] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2030170 name raid_bdev1, state configuring 00:30:04.840 request: 00:30:04.840 { 00:30:04.840 "name": "raid_bdev1", 00:30:04.840 "raid_level": "raid1", 00:30:04.840 "base_bdevs": [ 00:30:04.840 "malloc1", 00:30:04.840 "malloc2" 00:30:04.840 ], 00:30:04.840 "superblock": false, 00:30:04.840 "method": "bdev_raid_create", 00:30:04.840 "req_id": 1 00:30:04.840 } 00:30:04.840 Got JSON-RPC error response 00:30:04.840 response: 00:30:04.840 { 00:30:04.840 "code": -17, 00:30:04.840 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:04.840 } 00:30:04.840 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:30:04.840 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:04.840 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:04.840 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:04.840 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.840 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:30:05.099 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:30:05.099 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:30:05.099 10:40:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:05.099 [2024-07-26 10:40:17.995025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:05.099 [2024-07-26 10:40:17.995074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:05.099 [2024-07-26 10:40:17.995091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x201edf0 00:30:05.099 [2024-07-26 10:40:17.995102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:05.099 [2024-07-26 10:40:17.996428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:05.099 [2024-07-26 10:40:17.996454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:05.099 [2024-07-26 10:40:17.996497] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:05.099 [2024-07-26 10:40:17.996519] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:05.099 pt1 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 
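The last lines of the trace above re-create the pt1 passthru bdev over malloc1; examine finds the raid1 superblock on it and raid_bdev1 re-enters the "configuring" state with only one of its two base bdevs present. That exchange reduces to a short RPC sequence; the sketch below is illustrative only, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock, and the rpc_py variable plus the expected-value comments are additions for readability, not captured output.

rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Recreate the passthru bdev over malloc1 with a fixed UUID; bdev examine then
# finds the raid1 superblock written earlier and claims pt1 for raid_bdev1.
$rpc_py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

# Query the raid bdev state the same way verify_raid_bdev_state does.
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
# expected: configuring (pt2 has not been re-created yet)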
00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:05.358 "name": "raid_bdev1", 00:30:05.358 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:05.358 "strip_size_kb": 0, 00:30:05.358 "state": "configuring", 00:30:05.358 "raid_level": "raid1", 00:30:05.358 "superblock": true, 00:30:05.358 "num_base_bdevs": 2, 00:30:05.358 "num_base_bdevs_discovered": 1, 00:30:05.358 "num_base_bdevs_operational": 2, 00:30:05.358 "base_bdevs_list": [ 00:30:05.358 { 00:30:05.358 "name": "pt1", 00:30:05.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:05.358 "is_configured": true, 00:30:05.358 "data_offset": 256, 00:30:05.358 "data_size": 7936 00:30:05.358 }, 00:30:05.358 { 00:30:05.358 "name": null, 00:30:05.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:05.358 "is_configured": false, 00:30:05.358 "data_offset": 256, 00:30:05.358 "data_size": 7936 00:30:05.358 } 00:30:05.358 ] 00:30:05.358 }' 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:05.358 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:05.925 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:30:05.925 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:30:05.925 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:05.925 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:06.184 [2024-07-26 10:40:18.977626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:06.184 [2024-07-26 10:40:18.977672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:06.184 [2024-07-26 10:40:18.977689] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed8080 00:30:06.184 [2024-07-26 10:40:18.977701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:06.184 [2024-07-26 10:40:18.977856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:06.184 [2024-07-26 10:40:18.977870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:06.184 [2024-07-26 10:40:18.977910] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:06.184 [2024-07-26 10:40:18.977926] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:06.184 [2024-07-26 10:40:18.977999] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x201d9f0 00:30:06.184 [2024-07-26 10:40:18.978008] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:06.184 [2024-07-26 10:40:18.978055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x201fa10 00:30:06.184 [2024-07-26 10:40:18.978120] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x201d9f0 00:30:06.184 [2024-07-26 10:40:18.978129] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x201d9f0 00:30:06.184 [2024-07-26 10:40:18.978193] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.184 pt2 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.184 10:40:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.184 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.443 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:06.443 "name": "raid_bdev1", 00:30:06.443 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:06.443 "strip_size_kb": 0, 
00:30:06.443 "state": "online", 00:30:06.443 "raid_level": "raid1", 00:30:06.443 "superblock": true, 00:30:06.443 "num_base_bdevs": 2, 00:30:06.443 "num_base_bdevs_discovered": 2, 00:30:06.443 "num_base_bdevs_operational": 2, 00:30:06.443 "base_bdevs_list": [ 00:30:06.443 { 00:30:06.443 "name": "pt1", 00:30:06.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:06.443 "is_configured": true, 00:30:06.443 "data_offset": 256, 00:30:06.443 "data_size": 7936 00:30:06.443 }, 00:30:06.443 { 00:30:06.443 "name": "pt2", 00:30:06.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:06.443 "is_configured": true, 00:30:06.443 "data_offset": 256, 00:30:06.443 "data_size": 7936 00:30:06.443 } 00:30:06.443 ] 00:30:06.443 }' 00:30:06.443 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:06.443 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:07.009 10:40:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:07.268 [2024-07-26 10:40:19.988507] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:07.268 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:07.268 "name": "raid_bdev1", 00:30:07.268 "aliases": [ 00:30:07.268 "76148d02-28f9-43bf-84c9-c02623991eef" 00:30:07.268 ], 00:30:07.268 "product_name": "Raid Volume", 00:30:07.268 "block_size": 4128, 00:30:07.268 "num_blocks": 7936, 00:30:07.268 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:07.268 "md_size": 32, 00:30:07.268 "md_interleave": true, 00:30:07.268 "dif_type": 0, 00:30:07.268 "assigned_rate_limits": { 00:30:07.268 "rw_ios_per_sec": 0, 00:30:07.268 "rw_mbytes_per_sec": 0, 00:30:07.268 "r_mbytes_per_sec": 0, 00:30:07.268 "w_mbytes_per_sec": 0 00:30:07.268 }, 00:30:07.268 "claimed": false, 00:30:07.268 "zoned": false, 00:30:07.268 "supported_io_types": { 00:30:07.268 "read": true, 00:30:07.268 "write": true, 00:30:07.268 "unmap": false, 00:30:07.268 "flush": false, 00:30:07.268 "reset": true, 00:30:07.268 "nvme_admin": false, 00:30:07.268 "nvme_io": false, 00:30:07.268 "nvme_io_md": false, 00:30:07.268 "write_zeroes": true, 00:30:07.268 "zcopy": false, 00:30:07.268 "get_zone_info": false, 00:30:07.268 "zone_management": false, 00:30:07.268 "zone_append": false, 00:30:07.268 "compare": false, 00:30:07.268 "compare_and_write": false, 00:30:07.268 "abort": false, 00:30:07.268 "seek_hole": false, 00:30:07.268 "seek_data": false, 00:30:07.268 "copy": false, 
00:30:07.268 "nvme_iov_md": false 00:30:07.268 }, 00:30:07.268 "memory_domains": [ 00:30:07.268 { 00:30:07.268 "dma_device_id": "system", 00:30:07.268 "dma_device_type": 1 00:30:07.268 }, 00:30:07.268 { 00:30:07.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.268 "dma_device_type": 2 00:30:07.268 }, 00:30:07.268 { 00:30:07.268 "dma_device_id": "system", 00:30:07.268 "dma_device_type": 1 00:30:07.268 }, 00:30:07.268 { 00:30:07.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.268 "dma_device_type": 2 00:30:07.268 } 00:30:07.268 ], 00:30:07.268 "driver_specific": { 00:30:07.268 "raid": { 00:30:07.268 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:07.268 "strip_size_kb": 0, 00:30:07.268 "state": "online", 00:30:07.268 "raid_level": "raid1", 00:30:07.268 "superblock": true, 00:30:07.268 "num_base_bdevs": 2, 00:30:07.268 "num_base_bdevs_discovered": 2, 00:30:07.268 "num_base_bdevs_operational": 2, 00:30:07.268 "base_bdevs_list": [ 00:30:07.268 { 00:30:07.268 "name": "pt1", 00:30:07.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:07.268 "is_configured": true, 00:30:07.268 "data_offset": 256, 00:30:07.268 "data_size": 7936 00:30:07.268 }, 00:30:07.268 { 00:30:07.268 "name": "pt2", 00:30:07.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:07.268 "is_configured": true, 00:30:07.268 "data_offset": 256, 00:30:07.268 "data_size": 7936 00:30:07.268 } 00:30:07.268 ] 00:30:07.268 } 00:30:07.268 } 00:30:07.268 }' 00:30:07.268 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:07.268 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:07.268 pt2' 00:30:07.268 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:07.268 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:07.268 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:07.525 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:07.525 "name": "pt1", 00:30:07.525 "aliases": [ 00:30:07.525 "00000000-0000-0000-0000-000000000001" 00:30:07.525 ], 00:30:07.525 "product_name": "passthru", 00:30:07.525 "block_size": 4128, 00:30:07.525 "num_blocks": 8192, 00:30:07.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:07.525 "md_size": 32, 00:30:07.525 "md_interleave": true, 00:30:07.525 "dif_type": 0, 00:30:07.525 "assigned_rate_limits": { 00:30:07.525 "rw_ios_per_sec": 0, 00:30:07.525 "rw_mbytes_per_sec": 0, 00:30:07.525 "r_mbytes_per_sec": 0, 00:30:07.525 "w_mbytes_per_sec": 0 00:30:07.525 }, 00:30:07.525 "claimed": true, 00:30:07.525 "claim_type": "exclusive_write", 00:30:07.525 "zoned": false, 00:30:07.525 "supported_io_types": { 00:30:07.525 "read": true, 00:30:07.525 "write": true, 00:30:07.525 "unmap": true, 00:30:07.525 "flush": true, 00:30:07.525 "reset": true, 00:30:07.525 "nvme_admin": false, 00:30:07.525 "nvme_io": false, 00:30:07.525 "nvme_io_md": false, 00:30:07.525 "write_zeroes": true, 00:30:07.525 "zcopy": true, 00:30:07.525 "get_zone_info": false, 00:30:07.525 "zone_management": false, 00:30:07.525 "zone_append": false, 00:30:07.525 "compare": false, 00:30:07.525 "compare_and_write": false, 
00:30:07.525 "abort": true, 00:30:07.525 "seek_hole": false, 00:30:07.525 "seek_data": false, 00:30:07.525 "copy": true, 00:30:07.525 "nvme_iov_md": false 00:30:07.525 }, 00:30:07.525 "memory_domains": [ 00:30:07.525 { 00:30:07.525 "dma_device_id": "system", 00:30:07.525 "dma_device_type": 1 00:30:07.525 }, 00:30:07.525 { 00:30:07.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.525 "dma_device_type": 2 00:30:07.525 } 00:30:07.525 ], 00:30:07.525 "driver_specific": { 00:30:07.525 "passthru": { 00:30:07.525 "name": "pt1", 00:30:07.525 "base_bdev_name": "malloc1" 00:30:07.525 } 00:30:07.525 } 00:30:07.525 }' 00:30:07.525 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:07.525 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:07.525 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:30:07.525 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:07.782 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:08.039 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:08.039 "name": "pt2", 00:30:08.039 "aliases": [ 00:30:08.039 "00000000-0000-0000-0000-000000000002" 00:30:08.039 ], 00:30:08.039 "product_name": "passthru", 00:30:08.039 "block_size": 4128, 00:30:08.039 "num_blocks": 8192, 00:30:08.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:08.039 "md_size": 32, 00:30:08.039 "md_interleave": true, 00:30:08.039 "dif_type": 0, 00:30:08.039 "assigned_rate_limits": { 00:30:08.039 "rw_ios_per_sec": 0, 00:30:08.039 "rw_mbytes_per_sec": 0, 00:30:08.039 "r_mbytes_per_sec": 0, 00:30:08.039 "w_mbytes_per_sec": 0 00:30:08.039 }, 00:30:08.039 "claimed": true, 00:30:08.039 "claim_type": "exclusive_write", 00:30:08.039 "zoned": false, 00:30:08.039 "supported_io_types": { 00:30:08.039 "read": true, 00:30:08.039 "write": true, 00:30:08.039 "unmap": true, 00:30:08.039 "flush": true, 00:30:08.039 "reset": true, 00:30:08.039 "nvme_admin": false, 00:30:08.039 "nvme_io": false, 00:30:08.039 "nvme_io_md": false, 00:30:08.039 "write_zeroes": true, 00:30:08.039 "zcopy": 
true, 00:30:08.039 "get_zone_info": false, 00:30:08.039 "zone_management": false, 00:30:08.039 "zone_append": false, 00:30:08.039 "compare": false, 00:30:08.039 "compare_and_write": false, 00:30:08.039 "abort": true, 00:30:08.039 "seek_hole": false, 00:30:08.039 "seek_data": false, 00:30:08.039 "copy": true, 00:30:08.039 "nvme_iov_md": false 00:30:08.039 }, 00:30:08.039 "memory_domains": [ 00:30:08.039 { 00:30:08.039 "dma_device_id": "system", 00:30:08.040 "dma_device_type": 1 00:30:08.040 }, 00:30:08.040 { 00:30:08.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.040 "dma_device_type": 2 00:30:08.040 } 00:30:08.040 ], 00:30:08.040 "driver_specific": { 00:30:08.040 "passthru": { 00:30:08.040 "name": "pt2", 00:30:08.040 "base_bdev_name": "malloc2" 00:30:08.040 } 00:30:08.040 } 00:30:08.040 }' 00:30:08.040 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:08.040 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:08.297 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:30:08.297 10:40:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:08.297 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:30:08.298 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:08.556 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:30:08.556 [2024-07-26 10:40:21.404197] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:08.556 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 76148d02-28f9-43bf-84c9-c02623991eef '!=' 76148d02-28f9-43bf-84c9-c02623991eef ']' 00:30:08.556 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:30:08.556 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:08.556 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:30:08.556 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:08.815 [2024-07-26 10:40:21.636603] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.815 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.074 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.074 "name": "raid_bdev1", 00:30:09.074 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:09.074 "strip_size_kb": 0, 00:30:09.074 "state": "online", 00:30:09.074 "raid_level": "raid1", 00:30:09.074 "superblock": true, 00:30:09.074 "num_base_bdevs": 2, 00:30:09.074 "num_base_bdevs_discovered": 1, 00:30:09.074 "num_base_bdevs_operational": 1, 00:30:09.074 "base_bdevs_list": [ 00:30:09.074 { 00:30:09.074 "name": null, 00:30:09.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.074 "is_configured": false, 00:30:09.074 "data_offset": 256, 00:30:09.074 "data_size": 7936 00:30:09.074 }, 00:30:09.074 { 00:30:09.074 "name": "pt2", 00:30:09.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:09.074 "is_configured": true, 00:30:09.074 "data_offset": 256, 00:30:09.074 "data_size": 7936 00:30:09.074 } 00:30:09.074 ] 00:30:09.074 }' 00:30:09.074 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.074 10:40:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:10.011 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:10.011 [2024-07-26 10:40:22.743487] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:10.011 [2024-07-26 10:40:22.743509] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:10.011 [2024-07-26 10:40:22.743555] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:10.011 [2024-07-26 10:40:22.743594] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:10.011 [2024-07-26 10:40:22.743604] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x201d9f0 name raid_bdev1, state offline 00:30:10.011 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.011 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:30:10.269 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:30:10.269 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:30:10.269 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:30:10.269 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:30:10.269 10:40:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:10.529 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:30:10.529 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:30:10.529 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:30:10.529 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:30:10.529 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:30:10.529 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:10.529 [2024-07-26 10:40:23.413226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:10.529 [2024-07-26 10:40:23.413266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:10.529 [2024-07-26 10:40:23.413282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed82b0 00:30:10.529 [2024-07-26 10:40:23.413293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:10.529 [2024-07-26 10:40:23.414633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:10.529 [2024-07-26 10:40:23.414658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:10.529 [2024-07-26 10:40:23.414700] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:10.529 [2024-07-26 10:40:23.414722] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:10.529 [2024-07-26 10:40:23.414782] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x201f520 00:30:10.529 [2024-07-26 10:40:23.414791] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:10.529 [2024-07-26 10:40:23.414841] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5a2a0 00:30:10.529 [2024-07-26 10:40:23.414911] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x201f520 00:30:10.529 [2024-07-26 10:40:23.414920] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x201f520 00:30:10.529 [2024-07-26 10:40:23.414971] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.529 pt2 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.787 "name": "raid_bdev1", 00:30:10.787 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:10.787 "strip_size_kb": 0, 00:30:10.787 "state": "online", 00:30:10.787 "raid_level": "raid1", 00:30:10.787 "superblock": true, 00:30:10.787 "num_base_bdevs": 2, 00:30:10.787 "num_base_bdevs_discovered": 1, 00:30:10.787 "num_base_bdevs_operational": 1, 00:30:10.787 "base_bdevs_list": [ 00:30:10.787 { 00:30:10.787 "name": null, 00:30:10.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.787 "is_configured": false, 00:30:10.787 "data_offset": 256, 00:30:10.787 "data_size": 7936 00:30:10.787 }, 00:30:10.787 { 00:30:10.787 "name": "pt2", 00:30:10.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:10.787 "is_configured": true, 00:30:10.787 "data_offset": 256, 00:30:10.787 "data_size": 7936 00:30:10.787 } 00:30:10.787 ] 00:30:10.787 }' 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.787 10:40:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:11.352 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:11.609 [2024-07-26 10:40:24.407822] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:11.610 [2024-07-26 10:40:24.407845] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:11.610 [2024-07-26 10:40:24.407890] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:11.610 [2024-07-26 10:40:24.407941] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:11.610 [2024-07-26 10:40:24.407952] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x201f520 name raid_bdev1, state offline 00:30:11.610 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:30:11.610 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.867 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:30:11.867 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:30:11.867 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:30:11.867 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:12.125 [2024-07-26 10:40:24.869029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:12.125 [2024-07-26 10:40:24.869073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.125 [2024-07-26 10:40:24.869088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed7e10 00:30:12.125 [2024-07-26 10:40:24.869100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.125 [2024-07-26 10:40:24.870434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.125 [2024-07-26 10:40:24.870459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:12.125 [2024-07-26 10:40:24.870502] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:12.125 [2024-07-26 10:40:24.870523] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:12.125 [2024-07-26 10:40:24.870596] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:12.125 [2024-07-26 10:40:24.870608] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:12.125 [2024-07-26 10:40:24.870620] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x201cc90 name raid_bdev1, state configuring 00:30:12.125 [2024-07-26 10:40:24.870640] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:12.126 [2024-07-26 10:40:24.870684] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x201e7c0 00:30:12.126 [2024-07-26 10:40:24.870693] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:12.126 [2024-07-26 10:40:24.870744] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x201caf0 00:30:12.126 [2024-07-26 10:40:24.870807] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x201e7c0 00:30:12.126 [2024-07-26 10:40:24.870816] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x201e7c0 00:30:12.126 [2024-07-26 10:40:24.870870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.126 pt1 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 
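Immediately below, the trace checks that raid_bdev1 has come back online in degraded form, with one of its two base bdevs discovered. The same check can be reproduced by hand; the following one-liner is a sketch that uses only fields appearing in the JSON dumps above (the rpc_py variable and the expected-output comment are illustrative assumptions, not captured output).

rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Print the state plus discovered/total base bdev counts for raid_bdev1.
$rpc_py bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1")
           | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
# expected: online 1/2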
00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.126 10:40:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.384 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.384 "name": "raid_bdev1", 00:30:12.384 "uuid": "76148d02-28f9-43bf-84c9-c02623991eef", 00:30:12.384 "strip_size_kb": 0, 00:30:12.384 "state": "online", 00:30:12.384 "raid_level": "raid1", 00:30:12.384 "superblock": true, 00:30:12.384 "num_base_bdevs": 2, 00:30:12.384 "num_base_bdevs_discovered": 1, 00:30:12.384 "num_base_bdevs_operational": 1, 00:30:12.384 "base_bdevs_list": [ 00:30:12.384 { 00:30:12.384 "name": null, 00:30:12.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.384 "is_configured": false, 00:30:12.384 "data_offset": 256, 00:30:12.384 "data_size": 7936 00:30:12.384 }, 00:30:12.384 { 00:30:12.384 "name": "pt2", 00:30:12.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:12.384 "is_configured": true, 00:30:12.384 "data_offset": 256, 00:30:12.384 "data_size": 7936 00:30:12.384 } 00:30:12.384 ] 00:30:12.384 }' 00:30:12.384 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.384 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:12.949 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:12.949 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:30:13.207 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:30:13.207 10:40:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:13.207 10:40:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:30:13.466 [2024-07-26 10:40:26.132544] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 76148d02-28f9-43bf-84c9-c02623991eef '!=' 76148d02-28f9-43bf-84c9-c02623991eef ']' 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 3533703 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 3533703 ']' 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 3533703 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3533703 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3533703' 00:30:13.466 killing process with pid 3533703 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 3533703 00:30:13.466 [2024-07-26 10:40:26.211566] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:13.466 [2024-07-26 10:40:26.211614] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:13.466 [2024-07-26 10:40:26.211655] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:13.466 [2024-07-26 10:40:26.211665] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x201e7c0 name raid_bdev1, state offline 00:30:13.466 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 3533703 00:30:13.466 [2024-07-26 10:40:26.227860] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:13.725 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:30:13.725 00:30:13.725 real 0m14.718s 00:30:13.725 user 0m26.640s 00:30:13.725 sys 0m2.714s 00:30:13.725 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:13.725 10:40:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.725 ************************************ 00:30:13.725 END TEST raid_superblock_test_md_interleaved 00:30:13.725 ************************************ 00:30:13.725 10:40:26 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:30:13.725 10:40:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:13.725 10:40:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:13.725 10:40:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:13.725 ************************************ 00:30:13.725 START TEST 
raid_rebuild_test_sb_md_interleaved 00:30:13.725 ************************************ 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=3536413 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 3536413 /var/tmp/spdk-raid.sock 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 3536413 ']' 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:13.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:13.725 10:40:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:13.725 [2024-07-26 10:40:26.559476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:30:13.725 [2024-07-26 10:40:26.559532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3536413 ] 00:30:13.725 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:13.725 Zero copy mechanism will not be used. 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:13.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.984 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:13.985 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:13.985 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:13.985 [2024-07-26 10:40:26.694622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.985 [2024-07-26 10:40:26.738805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.985 [2024-07-26 10:40:26.798211] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:13.985 [2024-07-26 10:40:26.798243] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:14.919 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:14.919 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:30:14.919 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:14.919 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:30:14.919 
BaseBdev1_malloc 00:30:14.919 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:15.177 [2024-07-26 10:40:27.904392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:15.177 [2024-07-26 10:40:27.904437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.177 [2024-07-26 10:40:27.904457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10245a0 00:30:15.177 [2024-07-26 10:40:27.904469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.177 [2024-07-26 10:40:27.905746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.177 [2024-07-26 10:40:27.905772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:15.177 BaseBdev1 00:30:15.177 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:15.177 10:40:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:30:15.436 BaseBdev2_malloc 00:30:15.436 10:40:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:15.694 [2024-07-26 10:40:28.354307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:15.694 [2024-07-26 10:40:28.354348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.694 [2024-07-26 10:40:28.354366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfdf330 00:30:15.694 [2024-07-26 10:40:28.354377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.694 [2024-07-26 10:40:28.355656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.694 [2024-07-26 10:40:28.355682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:15.694 BaseBdev2 00:30:15.694 10:40:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:30:15.694 spare_malloc 00:30:15.953 10:40:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:15.953 spare_delay 00:30:15.953 10:40:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:16.212 [2024-07-26 10:40:29.036759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:16.212 [2024-07-26 10:40:29.036801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.212 [2024-07-26 10:40:29.036821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0xfcc920 00:30:16.212 [2024-07-26 10:40:29.036832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.212 [2024-07-26 10:40:29.038030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.212 [2024-07-26 10:40:29.038055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:16.212 spare 00:30:16.212 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:16.470 [2024-07-26 10:40:29.261371] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:16.470 [2024-07-26 10:40:29.262510] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:16.470 [2024-07-26 10:40:29.262648] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xfced70 00:30:16.470 [2024-07-26 10:40:29.262659] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:16.470 [2024-07-26 10:40:29.262728] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfcec80 00:30:16.470 [2024-07-26 10:40:29.262797] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfced70 00:30:16.471 [2024-07-26 10:40:29.262806] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfced70 00:30:16.471 [2024-07-26 10:40:29.262870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.471 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.729 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.729 "name": "raid_bdev1", 00:30:16.729 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:16.729 "strip_size_kb": 0, 00:30:16.729 "state": "online", 00:30:16.729 
"raid_level": "raid1", 00:30:16.729 "superblock": true, 00:30:16.729 "num_base_bdevs": 2, 00:30:16.729 "num_base_bdevs_discovered": 2, 00:30:16.729 "num_base_bdevs_operational": 2, 00:30:16.729 "base_bdevs_list": [ 00:30:16.729 { 00:30:16.729 "name": "BaseBdev1", 00:30:16.729 "uuid": "575dce97-46c2-56a9-904f-825fb1756fd3", 00:30:16.729 "is_configured": true, 00:30:16.729 "data_offset": 256, 00:30:16.729 "data_size": 7936 00:30:16.729 }, 00:30:16.729 { 00:30:16.729 "name": "BaseBdev2", 00:30:16.729 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:16.729 "is_configured": true, 00:30:16.729 "data_offset": 256, 00:30:16.729 "data_size": 7936 00:30:16.729 } 00:30:16.729 ] 00:30:16.729 }' 00:30:16.729 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.729 10:40:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:17.296 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:17.296 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:17.554 [2024-07-26 10:40:30.288295] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:17.554 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:30:17.554 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.554 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:17.812 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:30:17.812 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:30:17.812 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:30:17.812 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:18.071 [2024-07-26 10:40:30.749284] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.071 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.335 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.335 "name": "raid_bdev1", 00:30:18.335 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:18.335 "strip_size_kb": 0, 00:30:18.335 "state": "online", 00:30:18.335 "raid_level": "raid1", 00:30:18.335 "superblock": true, 00:30:18.335 "num_base_bdevs": 2, 00:30:18.335 "num_base_bdevs_discovered": 1, 00:30:18.335 "num_base_bdevs_operational": 1, 00:30:18.335 "base_bdevs_list": [ 00:30:18.336 { 00:30:18.336 "name": null, 00:30:18.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.336 "is_configured": false, 00:30:18.336 "data_offset": 256, 00:30:18.336 "data_size": 7936 00:30:18.336 }, 00:30:18.336 { 00:30:18.336 "name": "BaseBdev2", 00:30:18.336 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:18.336 "is_configured": true, 00:30:18.336 "data_offset": 256, 00:30:18.336 "data_size": 7936 00:30:18.336 } 00:30:18.336 ] 00:30:18.336 }' 00:30:18.336 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.336 10:40:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:18.939 10:40:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:18.939 [2024-07-26 10:40:31.780049] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:18.939 [2024-07-26 10:40:31.783400] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfcec80 00:30:18.939 [2024-07-26 10:40:31.785368] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:18.939 10:40:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:20.312 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.312 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:20.312 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:20.313 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:20.313 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:20.313 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.313 10:40:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.313 10:40:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:20.313 "name": "raid_bdev1", 00:30:20.313 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:20.313 "strip_size_kb": 0, 00:30:20.313 "state": "online", 00:30:20.313 "raid_level": "raid1", 00:30:20.313 "superblock": true, 00:30:20.313 "num_base_bdevs": 2, 00:30:20.313 "num_base_bdevs_discovered": 2, 00:30:20.313 "num_base_bdevs_operational": 2, 00:30:20.313 "process": { 00:30:20.313 "type": "rebuild", 00:30:20.313 "target": "spare", 00:30:20.313 "progress": { 00:30:20.313 "blocks": 3072, 00:30:20.313 "percent": 38 00:30:20.313 } 00:30:20.313 }, 00:30:20.313 "base_bdevs_list": [ 00:30:20.313 { 00:30:20.313 "name": "spare", 00:30:20.313 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:20.313 "is_configured": true, 00:30:20.313 "data_offset": 256, 00:30:20.313 "data_size": 7936 00:30:20.313 }, 00:30:20.313 { 00:30:20.313 "name": "BaseBdev2", 00:30:20.313 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:20.313 "is_configured": true, 00:30:20.313 "data_offset": 256, 00:30:20.313 "data_size": 7936 00:30:20.313 } 00:30:20.313 ] 00:30:20.313 }' 00:30:20.313 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:20.313 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:20.313 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:20.313 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:20.313 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:20.570 [2024-07-26 10:40:33.326355] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:20.570 [2024-07-26 10:40:33.397112] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:20.570 [2024-07-26 10:40:33.397164] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:20.570 [2024-07-26 10:40:33.397179] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:20.570 [2024-07-26 10:40:33.397187] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.570 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.827 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:20.827 "name": "raid_bdev1", 00:30:20.827 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:20.827 "strip_size_kb": 0, 00:30:20.827 "state": "online", 00:30:20.827 "raid_level": "raid1", 00:30:20.827 "superblock": true, 00:30:20.827 "num_base_bdevs": 2, 00:30:20.827 "num_base_bdevs_discovered": 1, 00:30:20.827 "num_base_bdevs_operational": 1, 00:30:20.827 "base_bdevs_list": [ 00:30:20.827 { 00:30:20.827 "name": null, 00:30:20.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.827 "is_configured": false, 00:30:20.827 "data_offset": 256, 00:30:20.827 "data_size": 7936 00:30:20.827 }, 00:30:20.827 { 00:30:20.827 "name": "BaseBdev2", 00:30:20.827 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:20.827 "is_configured": true, 00:30:20.827 "data_offset": 256, 00:30:20.827 "data_size": 7936 00:30:20.827 } 00:30:20.827 ] 00:30:20.827 }' 00:30:20.827 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:20.827 10:40:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.391 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.648 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:21.648 "name": "raid_bdev1", 00:30:21.648 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:21.648 "strip_size_kb": 0, 00:30:21.648 "state": "online", 00:30:21.648 "raid_level": "raid1", 00:30:21.648 "superblock": true, 00:30:21.648 "num_base_bdevs": 2, 00:30:21.648 "num_base_bdevs_discovered": 1, 00:30:21.648 "num_base_bdevs_operational": 1, 00:30:21.648 "base_bdevs_list": [ 00:30:21.648 { 00:30:21.648 "name": null, 00:30:21.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.648 "is_configured": false, 00:30:21.648 "data_offset": 256, 00:30:21.648 "data_size": 7936 00:30:21.648 }, 00:30:21.648 { 00:30:21.648 "name": 
"BaseBdev2", 00:30:21.648 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:21.648 "is_configured": true, 00:30:21.648 "data_offset": 256, 00:30:21.648 "data_size": 7936 00:30:21.648 } 00:30:21.648 ] 00:30:21.648 }' 00:30:21.648 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:21.648 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:21.648 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:21.648 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:21.648 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:21.905 [2024-07-26 10:40:34.740253] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:21.905 [2024-07-26 10:40:34.743606] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1023750 00:30:21.905 [2024-07-26 10:40:34.744941] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:21.905 10:40:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.278 10:40:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.278 "name": "raid_bdev1", 00:30:23.278 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:23.278 "strip_size_kb": 0, 00:30:23.278 "state": "online", 00:30:23.278 "raid_level": "raid1", 00:30:23.278 "superblock": true, 00:30:23.278 "num_base_bdevs": 2, 00:30:23.278 "num_base_bdevs_discovered": 2, 00:30:23.278 "num_base_bdevs_operational": 2, 00:30:23.278 "process": { 00:30:23.278 "type": "rebuild", 00:30:23.278 "target": "spare", 00:30:23.278 "progress": { 00:30:23.278 "blocks": 3072, 00:30:23.278 "percent": 38 00:30:23.278 } 00:30:23.278 }, 00:30:23.278 "base_bdevs_list": [ 00:30:23.278 { 00:30:23.278 "name": "spare", 00:30:23.278 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:23.278 "is_configured": true, 00:30:23.278 "data_offset": 256, 00:30:23.278 "data_size": 7936 00:30:23.278 }, 00:30:23.278 { 00:30:23.278 "name": "BaseBdev2", 00:30:23.278 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:23.278 "is_configured": true, 00:30:23.278 "data_offset": 256, 
00:30:23.278 "data_size": 7936 00:30:23.278 } 00:30:23.278 ] 00:30:23.278 }' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:30:23.278 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1080 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.278 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.536 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.536 "name": "raid_bdev1", 00:30:23.536 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:23.536 "strip_size_kb": 0, 00:30:23.536 "state": "online", 00:30:23.536 "raid_level": "raid1", 00:30:23.536 "superblock": true, 00:30:23.536 "num_base_bdevs": 2, 00:30:23.536 "num_base_bdevs_discovered": 2, 00:30:23.536 "num_base_bdevs_operational": 2, 00:30:23.536 "process": { 00:30:23.536 "type": "rebuild", 00:30:23.536 "target": "spare", 00:30:23.536 "progress": { 00:30:23.536 "blocks": 3840, 00:30:23.536 "percent": 48 00:30:23.536 } 00:30:23.536 }, 00:30:23.536 "base_bdevs_list": [ 00:30:23.536 { 00:30:23.536 "name": "spare", 00:30:23.536 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:23.536 "is_configured": true, 00:30:23.536 "data_offset": 256, 00:30:23.536 "data_size": 7936 00:30:23.536 }, 00:30:23.536 { 00:30:23.536 "name": "BaseBdev2", 00:30:23.536 "uuid": 
"647a73c2-18ae-5531-86f7-c61565586399", 00:30:23.536 "is_configured": true, 00:30:23.536 "data_offset": 256, 00:30:23.536 "data_size": 7936 00:30:23.536 } 00:30:23.536 ] 00:30:23.536 }' 00:30:23.536 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.536 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.536 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.536 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.536 10:40:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.909 "name": "raid_bdev1", 00:30:24.909 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:24.909 "strip_size_kb": 0, 00:30:24.909 "state": "online", 00:30:24.909 "raid_level": "raid1", 00:30:24.909 "superblock": true, 00:30:24.909 "num_base_bdevs": 2, 00:30:24.909 "num_base_bdevs_discovered": 2, 00:30:24.909 "num_base_bdevs_operational": 2, 00:30:24.909 "process": { 00:30:24.909 "type": "rebuild", 00:30:24.909 "target": "spare", 00:30:24.909 "progress": { 00:30:24.909 "blocks": 7168, 00:30:24.909 "percent": 90 00:30:24.909 } 00:30:24.909 }, 00:30:24.909 "base_bdevs_list": [ 00:30:24.909 { 00:30:24.909 "name": "spare", 00:30:24.909 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:24.909 "is_configured": true, 00:30:24.909 "data_offset": 256, 00:30:24.909 "data_size": 7936 00:30:24.909 }, 00:30:24.909 { 00:30:24.909 "name": "BaseBdev2", 00:30:24.909 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:24.909 "is_configured": true, 00:30:24.909 "data_offset": 256, 00:30:24.909 "data_size": 7936 00:30:24.909 } 00:30:24.909 ] 00:30:24.909 }' 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.909 10:40:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:24.909 10:40:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:25.166 [2024-07-26 10:40:37.867436] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:25.166 [2024-07-26 10:40:37.867488] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:25.166 [2024-07-26 10:40:37.867565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:26.119 "name": "raid_bdev1", 00:30:26.119 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:26.119 "strip_size_kb": 0, 00:30:26.119 "state": "online", 00:30:26.119 "raid_level": "raid1", 00:30:26.119 "superblock": true, 00:30:26.119 "num_base_bdevs": 2, 00:30:26.119 "num_base_bdevs_discovered": 2, 00:30:26.119 "num_base_bdevs_operational": 2, 00:30:26.119 "base_bdevs_list": [ 00:30:26.119 { 00:30:26.119 "name": "spare", 00:30:26.119 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:26.119 "is_configured": true, 00:30:26.119 "data_offset": 256, 00:30:26.119 "data_size": 7936 00:30:26.119 }, 00:30:26.119 { 00:30:26.119 "name": "BaseBdev2", 00:30:26.119 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:26.119 "is_configured": true, 00:30:26.119 "data_offset": 256, 00:30:26.119 "data_size": 7936 00:30:26.119 } 00:30:26.119 ] 00:30:26.119 }' 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:26.119 10:40:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:26.119 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:26.376 "name": "raid_bdev1", 00:30:26.376 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:26.376 "strip_size_kb": 0, 00:30:26.376 "state": "online", 00:30:26.376 "raid_level": "raid1", 00:30:26.376 "superblock": true, 00:30:26.376 "num_base_bdevs": 2, 00:30:26.376 "num_base_bdevs_discovered": 2, 00:30:26.376 "num_base_bdevs_operational": 2, 00:30:26.376 "base_bdevs_list": [ 00:30:26.376 { 00:30:26.376 "name": "spare", 00:30:26.376 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:26.376 "is_configured": true, 00:30:26.376 "data_offset": 256, 00:30:26.376 "data_size": 7936 00:30:26.376 }, 00:30:26.376 { 00:30:26.376 "name": "BaseBdev2", 00:30:26.376 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:26.376 "is_configured": true, 00:30:26.376 "data_offset": 256, 00:30:26.376 "data_size": 7936 00:30:26.376 } 00:30:26.376 ] 00:30:26.376 }' 00:30:26.376 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.634 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.892 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.892 "name": "raid_bdev1", 00:30:26.892 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:26.892 "strip_size_kb": 0, 00:30:26.892 "state": "online", 00:30:26.892 "raid_level": "raid1", 00:30:26.892 "superblock": true, 00:30:26.892 "num_base_bdevs": 2, 00:30:26.892 "num_base_bdevs_discovered": 2, 00:30:26.892 "num_base_bdevs_operational": 2, 00:30:26.892 "base_bdevs_list": [ 00:30:26.892 { 00:30:26.892 "name": "spare", 00:30:26.892 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:26.892 "is_configured": true, 00:30:26.892 "data_offset": 256, 00:30:26.892 "data_size": 7936 00:30:26.892 }, 00:30:26.892 { 00:30:26.892 "name": "BaseBdev2", 00:30:26.892 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:26.892 "is_configured": true, 00:30:26.892 "data_offset": 256, 00:30:26.892 "data_size": 7936 00:30:26.892 } 00:30:26.892 ] 00:30:26.892 }' 00:30:26.892 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.892 10:40:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:27.460 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:27.460 [2024-07-26 10:40:40.341814] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:27.460 [2024-07-26 10:40:40.341839] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:27.460 [2024-07-26 10:40:40.341892] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:27.460 [2024-07-26 10:40:40.341946] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:27.460 [2024-07-26 10:40:40.341962] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfced70 name raid_bdev1, state offline 00:30:27.719 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.719 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:30:27.719 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:27.719 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:30:27.719 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:30:27.719 10:40:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:28.284 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:28.542 [2024-07-26 10:40:41.324337] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:28.542 [2024-07-26 10:40:41.324378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.542 [2024-07-26 10:40:41.324395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfd2670 00:30:28.542 [2024-07-26 10:40:41.324407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.542 [2024-07-26 10:40:41.326050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.542 [2024-07-26 10:40:41.326080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:28.542 [2024-07-26 10:40:41.326131] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:28.542 [2024-07-26 10:40:41.326163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:28.542 [2024-07-26 10:40:41.326243] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:28.542 spare 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.542 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.542 [2024-07-26 10:40:41.426546] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xfcd1e0 00:30:28.542 [2024-07-26 10:40:41.426561] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:30:28.542 [2024-07-26 10:40:41.426635] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfcf620 00:30:28.542 [2024-07-26 10:40:41.426718] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfcd1e0 00:30:28.542 [2024-07-26 10:40:41.426733] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfcd1e0 00:30:28.542 [2024-07-26 10:40:41.426795] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.801 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.801 "name": 
"raid_bdev1", 00:30:28.801 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:28.801 "strip_size_kb": 0, 00:30:28.801 "state": "online", 00:30:28.801 "raid_level": "raid1", 00:30:28.801 "superblock": true, 00:30:28.801 "num_base_bdevs": 2, 00:30:28.801 "num_base_bdevs_discovered": 2, 00:30:28.801 "num_base_bdevs_operational": 2, 00:30:28.801 "base_bdevs_list": [ 00:30:28.801 { 00:30:28.801 "name": "spare", 00:30:28.801 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:28.801 "is_configured": true, 00:30:28.801 "data_offset": 256, 00:30:28.801 "data_size": 7936 00:30:28.801 }, 00:30:28.801 { 00:30:28.801 "name": "BaseBdev2", 00:30:28.801 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:28.801 "is_configured": true, 00:30:28.801 "data_offset": 256, 00:30:28.801 "data_size": 7936 00:30:28.801 } 00:30:28.801 ] 00:30:28.801 }' 00:30:28.801 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.801 10:40:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:29.369 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:29.370 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:29.370 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:29.370 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:29.370 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:29.370 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.370 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:29.629 "name": "raid_bdev1", 00:30:29.629 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:29.629 "strip_size_kb": 0, 00:30:29.629 "state": "online", 00:30:29.629 "raid_level": "raid1", 00:30:29.629 "superblock": true, 00:30:29.629 "num_base_bdevs": 2, 00:30:29.629 "num_base_bdevs_discovered": 2, 00:30:29.629 "num_base_bdevs_operational": 2, 00:30:29.629 "base_bdevs_list": [ 00:30:29.629 { 00:30:29.629 "name": "spare", 00:30:29.629 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:29.629 "is_configured": true, 00:30:29.629 "data_offset": 256, 00:30:29.629 "data_size": 7936 00:30:29.629 }, 00:30:29.629 { 00:30:29.629 "name": "BaseBdev2", 00:30:29.629 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:29.629 "is_configured": true, 00:30:29.629 "data_offset": 256, 00:30:29.629 "data_size": 7936 00:30:29.629 } 00:30:29.629 ] 00:30:29.629 }' 00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.629 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:29.889 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.889 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:30.148 [2024-07-26 10:40:42.888767] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.148 10:40:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.407 10:40:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:30.407 "name": "raid_bdev1", 00:30:30.407 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:30.407 "strip_size_kb": 0, 00:30:30.407 "state": "online", 00:30:30.407 "raid_level": "raid1", 00:30:30.407 "superblock": true, 00:30:30.407 "num_base_bdevs": 2, 00:30:30.407 "num_base_bdevs_discovered": 1, 00:30:30.407 "num_base_bdevs_operational": 1, 00:30:30.407 "base_bdevs_list": [ 00:30:30.407 { 00:30:30.407 "name": null, 00:30:30.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.407 "is_configured": false, 00:30:30.407 "data_offset": 256, 00:30:30.407 "data_size": 7936 00:30:30.407 }, 00:30:30.407 { 00:30:30.407 "name": "BaseBdev2", 00:30:30.407 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:30.407 "is_configured": true, 00:30:30.407 "data_offset": 256, 00:30:30.407 "data_size": 7936 00:30:30.407 } 00:30:30.407 ] 00:30:30.407 }' 00:30:30.407 10:40:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:30.407 10:40:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:30.975 10:40:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:31.233 [2024-07-26 10:40:43.919505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:31.233 [2024-07-26 10:40:43.919638] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:31.233 [2024-07-26 10:40:43.919653] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:31.233 [2024-07-26 10:40:43.919681] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:31.233 [2024-07-26 10:40:43.922897] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfd13e0 00:30:31.233 [2024-07-26 10:40:43.924769] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:31.233 10:40:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.166 10:40:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.425 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.425 "name": "raid_bdev1", 00:30:32.425 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:32.425 "strip_size_kb": 0, 00:30:32.425 "state": "online", 00:30:32.425 "raid_level": "raid1", 00:30:32.425 "superblock": true, 00:30:32.425 "num_base_bdevs": 2, 00:30:32.425 "num_base_bdevs_discovered": 2, 00:30:32.425 "num_base_bdevs_operational": 2, 00:30:32.425 "process": { 00:30:32.425 "type": "rebuild", 00:30:32.425 "target": "spare", 00:30:32.425 "progress": { 00:30:32.425 "blocks": 3072, 00:30:32.425 "percent": 38 00:30:32.425 } 00:30:32.425 }, 00:30:32.425 "base_bdevs_list": [ 00:30:32.425 { 00:30:32.425 "name": "spare", 00:30:32.425 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:32.425 "is_configured": true, 00:30:32.425 "data_offset": 256, 00:30:32.425 "data_size": 7936 00:30:32.425 }, 00:30:32.425 { 00:30:32.425 "name": "BaseBdev2", 00:30:32.425 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:32.425 "is_configured": true, 00:30:32.425 "data_offset": 256, 00:30:32.425 "data_size": 7936 00:30:32.425 } 00:30:32.425 ] 00:30:32.425 }' 00:30:32.425 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:30:32.425 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.425 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.425 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.425 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:32.684 [2024-07-26 10:40:45.482227] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:32.684 [2024-07-26 10:40:45.536477] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:32.684 [2024-07-26 10:40:45.536517] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.684 [2024-07-26 10:40:45.536531] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:32.684 [2024-07-26 10:40:45.536539] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.684 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.943 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:32.943 "name": "raid_bdev1", 00:30:32.943 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:32.943 "strip_size_kb": 0, 00:30:32.943 "state": "online", 00:30:32.943 "raid_level": "raid1", 00:30:32.943 "superblock": true, 00:30:32.943 "num_base_bdevs": 2, 00:30:32.943 "num_base_bdevs_discovered": 1, 00:30:32.943 "num_base_bdevs_operational": 1, 00:30:32.943 "base_bdevs_list": [ 00:30:32.943 { 00:30:32.943 "name": null, 00:30:32.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.943 "is_configured": false, 00:30:32.943 "data_offset": 256, 00:30:32.943 "data_size": 7936 
00:30:32.943 }, 00:30:32.943 { 00:30:32.943 "name": "BaseBdev2", 00:30:32.943 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:32.943 "is_configured": true, 00:30:32.943 "data_offset": 256, 00:30:32.943 "data_size": 7936 00:30:32.943 } 00:30:32.943 ] 00:30:32.943 }' 00:30:32.943 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:32.943 10:40:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:33.511 10:40:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:33.770 [2024-07-26 10:40:46.578729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:33.770 [2024-07-26 10:40:46.578774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.770 [2024-07-26 10:40:46.578797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfd0640 00:30:33.770 [2024-07-26 10:40:46.578809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.770 [2024-07-26 10:40:46.578985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.770 [2024-07-26 10:40:46.579000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:33.770 [2024-07-26 10:40:46.579050] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:33.770 [2024-07-26 10:40:46.579060] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:33.770 [2024-07-26 10:40:46.579070] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:33.770 [2024-07-26 10:40:46.579086] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:33.770 [2024-07-26 10:40:46.582318] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1023750 00:30:33.771 [2024-07-26 10:40:46.583644] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:33.771 spare 00:30:33.771 10:40:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.755 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.014 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:35.014 "name": "raid_bdev1", 00:30:35.014 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:35.014 "strip_size_kb": 0, 00:30:35.014 "state": "online", 00:30:35.014 "raid_level": "raid1", 00:30:35.014 "superblock": true, 00:30:35.014 "num_base_bdevs": 2, 00:30:35.014 "num_base_bdevs_discovered": 2, 00:30:35.014 "num_base_bdevs_operational": 2, 00:30:35.014 "process": { 00:30:35.014 "type": "rebuild", 00:30:35.014 "target": "spare", 00:30:35.014 "progress": { 00:30:35.014 "blocks": 2816, 00:30:35.014 "percent": 35 00:30:35.014 } 00:30:35.014 }, 00:30:35.014 "base_bdevs_list": [ 00:30:35.014 { 00:30:35.014 "name": "spare", 00:30:35.014 "uuid": "7ea1b9a0-9097-5c17-8c01-626a08caa620", 00:30:35.014 "is_configured": true, 00:30:35.014 "data_offset": 256, 00:30:35.014 "data_size": 7936 00:30:35.014 }, 00:30:35.014 { 00:30:35.014 "name": "BaseBdev2", 00:30:35.014 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:35.014 "is_configured": true, 00:30:35.014 "data_offset": 256, 00:30:35.014 "data_size": 7936 00:30:35.014 } 00:30:35.014 ] 00:30:35.014 }' 00:30:35.014 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:35.014 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:35.014 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:35.014 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:35.014 10:40:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:35.273 [2024-07-26 10:40:48.088983] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:35.273 [2024-07-26 
10:40:48.094640] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:35.273 [2024-07-26 10:40:48.094679] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.273 [2024-07-26 10:40:48.094693] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:35.273 [2024-07-26 10:40:48.094701] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.273 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.531 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:35.531 "name": "raid_bdev1", 00:30:35.531 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:35.531 "strip_size_kb": 0, 00:30:35.531 "state": "online", 00:30:35.531 "raid_level": "raid1", 00:30:35.531 "superblock": true, 00:30:35.531 "num_base_bdevs": 2, 00:30:35.531 "num_base_bdevs_discovered": 1, 00:30:35.531 "num_base_bdevs_operational": 1, 00:30:35.531 "base_bdevs_list": [ 00:30:35.531 { 00:30:35.531 "name": null, 00:30:35.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.531 "is_configured": false, 00:30:35.531 "data_offset": 256, 00:30:35.531 "data_size": 7936 00:30:35.531 }, 00:30:35.531 { 00:30:35.531 "name": "BaseBdev2", 00:30:35.531 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:35.531 "is_configured": true, 00:30:35.531 "data_offset": 256, 00:30:35.531 "data_size": 7936 00:30:35.531 } 00:30:35.531 ] 00:30:35.531 }' 00:30:35.531 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.531 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.097 10:40:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.356 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:36.356 "name": "raid_bdev1", 00:30:36.356 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:36.356 "strip_size_kb": 0, 00:30:36.356 "state": "online", 00:30:36.356 "raid_level": "raid1", 00:30:36.356 "superblock": true, 00:30:36.356 "num_base_bdevs": 2, 00:30:36.356 "num_base_bdevs_discovered": 1, 00:30:36.356 "num_base_bdevs_operational": 1, 00:30:36.356 "base_bdevs_list": [ 00:30:36.356 { 00:30:36.356 "name": null, 00:30:36.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.356 "is_configured": false, 00:30:36.356 "data_offset": 256, 00:30:36.356 "data_size": 7936 00:30:36.356 }, 00:30:36.356 { 00:30:36.356 "name": "BaseBdev2", 00:30:36.356 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:36.356 "is_configured": true, 00:30:36.356 "data_offset": 256, 00:30:36.356 "data_size": 7936 00:30:36.356 } 00:30:36.356 ] 00:30:36.356 }' 00:30:36.356 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:36.356 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:36.356 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:36.614 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:36.614 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:36.614 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:36.872 [2024-07-26 10:40:49.690347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:36.872 [2024-07-26 10:40:49.690391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.872 [2024-07-26 10:40:49.690408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfd0db0 00:30:36.872 [2024-07-26 10:40:49.690420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.872 [2024-07-26 10:40:49.690567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.872 [2024-07-26 10:40:49.690581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:36.872 [2024-07-26 10:40:49.690622] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 
00:30:36.872 [2024-07-26 10:40:49.690633] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:36.872 [2024-07-26 10:40:49.690642] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:36.872 BaseBdev1 00:30:36.872 10:40:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.244 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.245 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.245 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.245 "name": "raid_bdev1", 00:30:38.245 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:38.245 "strip_size_kb": 0, 00:30:38.245 "state": "online", 00:30:38.245 "raid_level": "raid1", 00:30:38.245 "superblock": true, 00:30:38.245 "num_base_bdevs": 2, 00:30:38.245 "num_base_bdevs_discovered": 1, 00:30:38.245 "num_base_bdevs_operational": 1, 00:30:38.245 "base_bdevs_list": [ 00:30:38.245 { 00:30:38.245 "name": null, 00:30:38.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.245 "is_configured": false, 00:30:38.245 "data_offset": 256, 00:30:38.245 "data_size": 7936 00:30:38.245 }, 00:30:38.245 { 00:30:38.245 "name": "BaseBdev2", 00:30:38.245 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:38.245 "is_configured": true, 00:30:38.245 "data_offset": 256, 00:30:38.245 "data_size": 7936 00:30:38.245 } 00:30:38.245 ] 00:30:38.245 }' 00:30:38.245 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.245 10:40:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:38.810 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:38.810 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:38.810 
10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:38.810 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:38.810 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:38.810 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.810 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:39.069 "name": "raid_bdev1", 00:30:39.069 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:39.069 "strip_size_kb": 0, 00:30:39.069 "state": "online", 00:30:39.069 "raid_level": "raid1", 00:30:39.069 "superblock": true, 00:30:39.069 "num_base_bdevs": 2, 00:30:39.069 "num_base_bdevs_discovered": 1, 00:30:39.069 "num_base_bdevs_operational": 1, 00:30:39.069 "base_bdevs_list": [ 00:30:39.069 { 00:30:39.069 "name": null, 00:30:39.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.069 "is_configured": false, 00:30:39.069 "data_offset": 256, 00:30:39.069 "data_size": 7936 00:30:39.069 }, 00:30:39.069 { 00:30:39.069 "name": "BaseBdev2", 00:30:39.069 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:39.069 "is_configured": true, 00:30:39.069 "data_offset": 256, 00:30:39.069 "data_size": 7936 00:30:39.069 } 00:30:39.069 ] 00:30:39.069 }' 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:39.069 10:40:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:39.328 [2024-07-26 10:40:52.052609] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:39.328 [2024-07-26 10:40:52.052715] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:39.328 [2024-07-26 10:40:52.052729] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:39.328 request: 00:30:39.328 { 00:30:39.328 "base_bdev": "BaseBdev1", 00:30:39.328 "raid_bdev": "raid_bdev1", 00:30:39.328 "method": "bdev_raid_add_base_bdev", 00:30:39.328 "req_id": 1 00:30:39.328 } 00:30:39.328 Got JSON-RPC error response 00:30:39.328 response: 00:30:39.328 { 00:30:39.328 "code": -22, 00:30:39.328 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:39.328 } 00:30:39.328 10:40:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:30:39.328 10:40:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:39.328 10:40:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:39.328 10:40:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:39.328 10:40:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.265 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.524 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.524 "name": "raid_bdev1", 00:30:40.524 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:40.524 "strip_size_kb": 0, 00:30:40.524 "state": "online", 00:30:40.524 "raid_level": "raid1", 00:30:40.524 "superblock": true, 00:30:40.524 "num_base_bdevs": 2, 00:30:40.524 "num_base_bdevs_discovered": 1, 00:30:40.524 "num_base_bdevs_operational": 1, 00:30:40.524 "base_bdevs_list": [ 00:30:40.524 { 00:30:40.524 "name": null, 00:30:40.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.524 "is_configured": false, 00:30:40.524 "data_offset": 256, 00:30:40.524 "data_size": 7936 00:30:40.524 }, 00:30:40.524 { 00:30:40.524 "name": "BaseBdev2", 00:30:40.524 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:40.524 "is_configured": true, 00:30:40.524 "data_offset": 256, 00:30:40.524 "data_size": 7936 00:30:40.524 } 00:30:40.524 ] 00:30:40.524 }' 00:30:40.524 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.524 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.091 10:40:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:41.350 "name": "raid_bdev1", 00:30:41.350 "uuid": "39761a0e-d6b0-4e83-b047-7f9309f4e7a0", 00:30:41.350 "strip_size_kb": 0, 00:30:41.350 "state": "online", 00:30:41.350 "raid_level": "raid1", 00:30:41.350 "superblock": true, 00:30:41.350 "num_base_bdevs": 2, 00:30:41.350 "num_base_bdevs_discovered": 1, 00:30:41.350 "num_base_bdevs_operational": 1, 00:30:41.350 "base_bdevs_list": [ 00:30:41.350 { 00:30:41.350 "name": null, 00:30:41.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.350 "is_configured": false, 00:30:41.350 "data_offset": 256, 00:30:41.350 "data_size": 7936 00:30:41.350 }, 00:30:41.350 { 00:30:41.350 "name": "BaseBdev2", 00:30:41.350 "uuid": "647a73c2-18ae-5531-86f7-c61565586399", 00:30:41.350 "is_configured": true, 00:30:41.350 "data_offset": 256, 00:30:41.350 "data_size": 7936 00:30:41.350 } 00:30:41.350 ] 00:30:41.350 }' 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
jq -r '.process.type // "none"' 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 3536413 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 3536413 ']' 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 3536413 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:41.350 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3536413 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3536413' 00:30:41.609 killing process with pid 3536413 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 3536413 00:30:41.609 Received shutdown signal, test time was about 60.000000 seconds 00:30:41.609 00:30:41.609 Latency(us) 00:30:41.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.609 =================================================================================================================== 00:30:41.609 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:41.609 [2024-07-26 10:40:54.271432] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:41.609 [2024-07-26 10:40:54.271513] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:41.609 [2024-07-26 10:40:54.271556] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:41.609 [2024-07-26 10:40:54.271568] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfcd1e0 name raid_bdev1, state offline 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 3536413 00:30:41.609 [2024-07-26 10:40:54.296205] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:30:41.609 00:30:41.609 real 0m27.979s 00:30:41.609 user 0m44.358s 00:30:41.609 sys 0m3.740s 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.609 10:40:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:30:41.609 ************************************ 00:30:41.609 END TEST raid_rebuild_test_sb_md_interleaved 00:30:41.609 ************************************ 00:30:41.869 10:40:54 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:30:41.869 
10:40:54 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:30:41.869 10:40:54 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 3536413 ']' 00:30:41.869 10:40:54 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 3536413 00:30:41.869 10:40:54 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:30:41.869 00:30:41.869 real 17m47.686s 00:30:41.869 user 30m3.650s 00:30:41.869 sys 3m16.344s 00:30:41.869 10:40:54 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:41.869 10:40:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:41.869 ************************************ 00:30:41.869 END TEST bdev_raid 00:30:41.869 ************************************ 00:30:41.869 10:40:54 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:30:41.869 10:40:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:41.869 10:40:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.869 10:40:54 -- common/autotest_common.sh@10 -- # set +x 00:30:41.869 ************************************ 00:30:41.869 START TEST bdevperf_config 00:30:41.869 ************************************ 00:30:41.869 10:40:54 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:30:41.869 * Looking for test storage... 00:30:41.869 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:30:41.869 10:40:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:42.129 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:30:42.129 
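The create_job calls starting here (global, then job0 through job3 on the following lines) build up test.conf, the fio-style job file that the bdevperf run at test_config.sh@22 consumes. Pieced together from the traced variables (job_section, rw, filename) and the cat/echo appends in common.sh@8-@20, the generated file plausibly looks like the sketch below; the exact key layout is inferred from the trace, not quoted from it.

# Illustrative reconstruction only: key names are an assumption based on the
# create_job trace; the test.conf path is the one passed to bdevperf via -j.
testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
cat <<- EOF >> "$testconf"
	[global]
	filename=Malloc0
	rw=read
	[job0]
	[job1]
	[job2]
	[job3]
EOF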
10:40:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:42.129 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:42.129 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:42.129 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:42.129 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:42.129 10:40:54 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:44.663 10:40:57 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-26 10:40:54.865177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:44.663 [2024-07-26 10:40:54.865239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541613 ] 00:30:44.663 Using job config with 4 jobs 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.663 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:44.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:44.664 [2024-07-26 10:40:55.015183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.664 [2024-07-26 10:40:55.074024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.664 cpumask for '\''job0'\'' is too big 00:30:44.664 cpumask for '\''job1'\'' is too big 00:30:44.664 cpumask for '\''job2'\'' is too big 00:30:44.664 cpumask for '\''job3'\'' is too big 00:30:44.664 Running I/O for 2 seconds... 00:30:44.664 00:30:44.664 Latency(us) 00:30:44.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.02 25852.16 25.25 0.00 0.00 9892.95 1703.94 15099.49 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.02 25830.14 25.22 0.00 0.00 9880.81 1690.83 13369.34 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.02 25808.26 25.20 0.00 0.00 9867.97 1690.83 11691.62 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.03 25786.51 25.18 0.00 0.00 9856.25 1677.72 10485.76 00:30:44.664 =================================================================================================================== 00:30:44.664 Total : 103277.07 100.86 0.00 0.00 9874.50 1677.72 15099.49' 00:30:44.664 10:40:57 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-26 10:40:54.865177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:44.664 [2024-07-26 10:40:54.865239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541613 ] 00:30:44.664 Using job config with 4 jobs 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:44.664 [2024-07-26 10:40:55.015183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.664 [2024-07-26 10:40:55.074024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.664 cpumask for '\''job0'\'' is too big 00:30:44.664 cpumask for '\''job1'\'' is too big 00:30:44.664 cpumask for '\''job2'\'' is too big 00:30:44.664 cpumask for '\''job3'\'' is too big 00:30:44.664 Running I/O for 2 seconds... 00:30:44.664 00:30:44.664 Latency(us) 00:30:44.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.02 25852.16 25.25 0.00 0.00 9892.95 1703.94 15099.49 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.02 25830.14 25.22 0.00 0.00 9880.81 1690.83 13369.34 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.02 25808.26 25.20 0.00 0.00 9867.97 1690.83 11691.62 00:30:44.664 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.664 Malloc0 : 2.03 25786.51 25.18 0.00 0.00 9856.25 1677.72 10485.76 00:30:44.664 =================================================================================================================== 00:30:44.664 Total : 103277.07 100.86 0.00 0.00 9874.50 1677.72 15099.49' 00:30:44.664 10:40:57 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-26 10:40:54.865177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:44.664 [2024-07-26 10:40:54.865239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541613 ] 00:30:44.664 Using job config with 4 jobs 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:44.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.664 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:44.665 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.665 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:44.665 [2024-07-26 10:40:55.015183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.665 [2024-07-26 10:40:55.074024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.665 cpumask for '\''job0'\'' is too big 00:30:44.665 cpumask for '\''job1'\'' is too big 00:30:44.665 cpumask for '\''job2'\'' is too big 00:30:44.665 cpumask for '\''job3'\'' is too big 00:30:44.665 Running I/O for 2 seconds... 00:30:44.665 00:30:44.665 Latency(us) 00:30:44.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.665 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.665 Malloc0 : 2.02 25852.16 25.25 0.00 0.00 9892.95 1703.94 15099.49 00:30:44.665 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.665 Malloc0 : 2.02 25830.14 25.22 0.00 0.00 9880.81 1690.83 13369.34 00:30:44.665 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.665 Malloc0 : 2.02 25808.26 25.20 0.00 0.00 9867.97 1690.83 11691.62 00:30:44.665 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:44.665 Malloc0 : 2.03 25786.51 25.18 0.00 0.00 9856.25 1677.72 10485.76 00:30:44.665 =================================================================================================================== 00:30:44.665 Total : 103277.07 100.86 0.00 0.00 9874.50 1677.72 15099.49' 00:30:44.665 10:40:57 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:30:44.665 10:40:57 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:30:44.665 10:40:57 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:30:44.665 10:40:57 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:44.665 [2024-07-26 10:40:57.505105] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:44.665 [2024-07-26 10:40:57.505181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542059 ] 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:44.924 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:44.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:44.924 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:44.924 [2024-07-26 10:40:57.653953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.924 [2024-07-26 10:40:57.715643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.183 cpumask for 'job0' is too big 00:30:45.183 cpumask for 'job1' is too big 00:30:45.183 cpumask for 'job2' is too big 00:30:45.183 cpumask for 'job3' is too big 00:30:47.712 10:41:00 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:30:47.712 Running I/O for 2 seconds... 
00:30:47.712 00:30:47.712 Latency(us) 00:30:47.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.712 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:47.712 Malloc0 : 2.02 25773.60 25.17 0.00 0.00 9922.35 1703.94 15204.35 00:30:47.712 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:47.712 Malloc0 : 2.02 25750.80 25.15 0.00 0.00 9910.70 1690.83 13526.63 00:30:47.712 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:47.712 Malloc0 : 2.02 25728.25 25.13 0.00 0.00 9898.77 1677.72 11744.05 00:30:47.712 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:30:47.712 Malloc0 : 2.02 25705.72 25.10 0.00 0.00 9887.08 1677.72 10223.62 00:30:47.712 =================================================================================================================== 00:30:47.712 Total : 102958.37 100.55 0.00 0.00 9904.73 1677.72 15204.35' 00:30:47.712 10:41:00 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:30:47.712 10:41:00 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:47.712 10:41:00 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:30:47.712 10:41:00 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:30:47.712 10:41:00 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:47.713 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:47.713 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:47.713 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:47.713 10:41:00 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:50.273 10:41:02 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-26 10:41:00.155186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:30:50.273 [2024-07-26 10:41:00.155249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542414 ] 00:30:50.273 Using job config with 3 jobs 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:50.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.273 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 
0000:3f:01.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:50.274 [2024-07-26 10:41:00.304694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.274 [2024-07-26 10:41:00.357957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.274 cpumask for '\''job0'\'' is too big 00:30:50.274 cpumask for '\''job1'\'' is too big 00:30:50.274 cpumask for '\''job2'\'' is too big 00:30:50.274 Running I/O for 2 seconds... 00:30:50.274 00:30:50.274 Latency(us) 00:30:50.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.274 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.274 Malloc0 : 2.01 34636.65 33.82 0.00 0.00 7386.73 1703.94 10905.19 00:30:50.274 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.274 Malloc0 : 2.01 34607.18 33.80 0.00 0.00 7377.81 1690.83 9122.61 00:30:50.274 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.274 Malloc0 : 2.02 34661.84 33.85 0.00 0.00 7350.41 865.08 7916.75 00:30:50.274 =================================================================================================================== 00:30:50.274 Total : 103905.67 101.47 0.00 0.00 7371.62 865.08 10905.19' 00:30:50.274 10:41:02 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-26 10:41:00.155186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:50.274 [2024-07-26 10:41:00.155249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542414 ] 00:30:50.274 Using job config with 3 jobs 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:50.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.274 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:50.274 [2024-07-26 10:41:00.304694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.274 [2024-07-26 10:41:00.357957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.274 cpumask for '\''job0'\'' is too big 00:30:50.274 cpumask for '\''job1'\'' is too big 00:30:50.274 cpumask for '\''job2'\'' is too big 00:30:50.274 Running I/O for 2 seconds... 00:30:50.274 00:30:50.274 Latency(us) 00:30:50.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.274 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.274 Malloc0 : 2.01 34636.65 33.82 0.00 0.00 7386.73 1703.94 10905.19 00:30:50.274 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.274 Malloc0 : 2.01 34607.18 33.80 0.00 0.00 7377.81 1690.83 9122.61 00:30:50.274 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.274 Malloc0 : 2.02 34661.84 33.85 0.00 0.00 7350.41 865.08 7916.75 00:30:50.274 =================================================================================================================== 00:30:50.274 Total : 103905.67 101.47 0.00 0.00 7371.62 865.08 10905.19' 00:30:50.274 10:41:02 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:30:50.274 10:41:02 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-26 10:41:00.155186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:50.274 [2024-07-26 10:41:00.155249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542414 ] 00:30:50.275 Using job config with 3 jobs 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:50.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:50.275 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:50.275 [2024-07-26 10:41:00.304694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.275 [2024-07-26 10:41:00.357957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.275 cpumask for '\''job0'\'' is too big 00:30:50.275 cpumask for '\''job1'\'' is too big 00:30:50.275 cpumask for '\''job2'\'' is too big 00:30:50.275 Running I/O for 2 seconds... 00:30:50.275 00:30:50.275 Latency(us) 00:30:50.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.275 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.275 Malloc0 : 2.01 34636.65 33.82 0.00 0.00 7386.73 1703.94 10905.19 00:30:50.275 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.275 Malloc0 : 2.01 34607.18 33.80 0.00 0.00 7377.81 1690.83 9122.61 00:30:50.275 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:30:50.275 Malloc0 : 2.02 34661.84 33.85 0.00 0.00 7350.41 865.08 7916.75 00:30:50.275 =================================================================================================================== 00:30:50.275 Total : 103905.67 101.47 0.00 0.00 7371.62 865.08 10905.19' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@18 -- # 
job='[global]' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:50.275 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:50.275 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:50.275 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:50.275 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:30:50.275 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:30:50.275 10:41:02 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:52.801 10:41:05 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-26 10:41:02.806293] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:52.801 [2024-07-26 10:41:02.806357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542946 ] 00:30:52.801 Using job config with 4 jobs 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:52.801 [2024-07-26 10:41:02.951606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.801 [2024-07-26 10:41:03.013794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.801 cpumask for '\''job0'\'' is too big 00:30:52.801 cpumask for '\''job1'\'' is too big 00:30:52.801 cpumask for '\''job2'\'' is too big 00:30:52.801 cpumask for '\''job3'\'' is too big 00:30:52.801 Running I/O for 2 seconds... 00:30:52.801 00:30:52.801 Latency(us) 00:30:52.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.801 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc0 : 2.03 12889.90 12.59 0.00 0.00 19840.24 3486.52 30618.42 00:30:52.801 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc1 : 2.03 12878.76 12.58 0.00 0.00 19841.24 4299.16 30618.42 00:30:52.801 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc0 : 2.03 12867.90 12.57 0.00 0.00 19790.37 3486.52 27053.26 00:30:52.801 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc1 : 2.04 12903.49 12.60 0.00 0.00 19720.23 4246.73 27053.26 00:30:52.801 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc0 : 2.05 12892.71 12.59 0.00 0.00 19672.19 3486.52 23488.10 00:30:52.801 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc1 : 2.05 12881.62 12.58 0.00 0.00 19671.36 4272.95 23488.10 00:30:52.801 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc0 : 2.05 12870.84 12.57 0.00 0.00 19623.42 3486.52 20447.23 00:30:52.801 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.801 Malloc1 : 2.05 12859.89 12.56 0.00 0.00 19623.16 4272.95 20656.95 00:30:52.801 =================================================================================================================== 00:30:52.801 Total : 103045.11 100.63 0.00 0.00 19722.41 3486.52 30618.42' 00:30:52.801 10:41:05 bdevperf_config -- 
bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-26 10:41:02.806293] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:30:52.801 [2024-07-26 10:41:02.806357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542946 ] 00:30:52.801 Using job config with 4 jobs 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:52.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.801 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:52.802 [2024-07-26 10:41:02.951606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.802 [2024-07-26 10:41:03.013794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.802 cpumask for '\''job0'\'' is too big 00:30:52.802 cpumask for '\''job1'\'' is too big 00:30:52.802 cpumask for '\''job2'\'' is too big 00:30:52.802 cpumask for '\''job3'\'' is too big 00:30:52.802 Running I/O for 2 seconds... 
00:30:52.802 00:30:52.802 Latency(us) 00:30:52.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.03 12889.90 12.59 0.00 0.00 19840.24 3486.52 30618.42 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.03 12878.76 12.58 0.00 0.00 19841.24 4299.16 30618.42 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.03 12867.90 12.57 0.00 0.00 19790.37 3486.52 27053.26 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.04 12903.49 12.60 0.00 0.00 19720.23 4246.73 27053.26 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.05 12892.71 12.59 0.00 0.00 19672.19 3486.52 23488.10 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.05 12881.62 12.58 0.00 0.00 19671.36 4272.95 23488.10 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.05 12870.84 12.57 0.00 0.00 19623.42 3486.52 20447.23 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.05 12859.89 12.56 0.00 0.00 19623.16 4272.95 20656.95 00:30:52.802 =================================================================================================================== 00:30:52.802 Total : 103045.11 100.63 0.00 0.00 19722.41 3486.52 30618.42' 00:30:52.802 10:41:05 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-26 10:41:02.806293] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:52.802 [2024-07-26 10:41:02.806357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542946 ] 00:30:52.802 Using job config with 4 jobs 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:52.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.802 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:52.802 [2024-07-26 10:41:02.951606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.802 [2024-07-26 10:41:03.013794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.802 cpumask for '\''job0'\'' is too big 00:30:52.802 cpumask for '\''job1'\'' is too big 00:30:52.802 cpumask for '\''job2'\'' is too big 00:30:52.802 cpumask for '\''job3'\'' is too big 00:30:52.802 Running I/O for 2 seconds... 00:30:52.802 00:30:52.802 Latency(us) 00:30:52.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.03 12889.90 12.59 0.00 0.00 19840.24 3486.52 30618.42 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.03 12878.76 12.58 0.00 0.00 19841.24 4299.16 30618.42 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.03 12867.90 12.57 0.00 0.00 19790.37 3486.52 27053.26 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.04 12903.49 12.60 0.00 0.00 19720.23 4246.73 27053.26 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.05 12892.71 12.59 0.00 0.00 19672.19 3486.52 23488.10 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.05 12881.62 12.58 0.00 0.00 19671.36 4272.95 23488.10 00:30:52.802 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc0 : 2.05 12870.84 12.57 0.00 0.00 19623.42 3486.52 20447.23 00:30:52.802 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:52.802 Malloc1 : 2.05 12859.89 12.56 0.00 0.00 19623.16 4272.95 20656.95 00:30:52.802 =================================================================================================================== 00:30:52.802 Total : 103045.11 100.63 0.00 0.00 19722.41 3486.52 30618.42' 00:30:52.802 10:41:05 bdevperf_config -- 
bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:30:52.802 10:41:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:30:52.802 10:41:05 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:30:52.802 10:41:05 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:30:52.803 10:41:05 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:30:52.803 10:41:05 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:30:52.803 00:30:52.803 real 0m10.769s 00:30:52.803 user 0m9.487s 00:30:52.803 sys 0m1.131s 00:30:52.803 10:41:05 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:52.803 10:41:05 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:30:52.803 ************************************ 00:30:52.803 END TEST bdevperf_config 00:30:52.803 ************************************ 00:30:52.803 10:41:05 -- spdk/autotest.sh@196 -- # uname -s 00:30:52.803 10:41:05 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:30:52.803 10:41:05 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:52.803 10:41:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:52.803 10:41:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.803 10:41:05 -- common/autotest_common.sh@10 -- # set +x 00:30:52.803 ************************************ 00:30:52.803 START TEST reactor_set_interrupt 00:30:52.803 ************************************ 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:52.803 * Looking for test storage... 00:30:52.803 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
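The bdevperf_config test that finishes above drives bdevperf entirely through a generated INI-style job file: create_job appends [jobN] sections to test.conf, bdevperf is run against it for 2 seconds with --json conf.json -j test.conf, get_num_jobs greps "Using job config with N jobs" out of the captured output, and cleanup removes test.conf. A minimal bash sketch of that flow follows; the helper bodies are simplified reconstructions from the xtrace output above, not the exact bdevperf/common.sh code.

#!/usr/bin/env bash
# Sketch of the bdevperf_config flow traced above; helper bodies are simplified
# reconstructions from the xtrace output, not the exact bdevperf/common.sh code.
rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
bdevperf=$rootdir/build/examples/bdevperf
jsonconf=$rootdir/test/bdev/bdevperf/conf.json
testconf=$rootdir/test/bdev/bdevperf/test.conf

create_job() {
    # Append an INI-style section such as [job0], with optional rw=/filename= keys.
    local job_section=$1 rw=$2 filename=$3
    {
        echo "[$job_section]"
        [[ -n $filename ]] && echo "filename=$filename"
        [[ -n $rw ]] && echo "rw=$rw"
        echo
    } >> "$testconf"
}

get_num_jobs() {
    # bdevperf reports "Using job config with N jobs"; extract N from its output.
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

cleanup() {
    rm -f "$testconf"
}

create_job global rw Malloc0:Malloc1   # shared defaults for every job section
create_job job0
create_job job1
create_job job2
create_job job3
output=$("$bdevperf" -t 2 --json "$jsonconf" -j "$testconf")
[[ $(get_num_jobs "$output") == 4 ]]   # assert bdevperf picked up all four jobs
cleanup

This mirrors the last run in the trace (four empty job sections plus a global rw section, checked with [[ 4 == \4 ]]); the earlier runs differ only in the sections created and the expected count.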
00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:52.803 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:30:52.803 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@23 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:30:52.803 10:41:05 
reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:30:52.803 10:41:05 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:30:52.804 10:41:05 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:30:52.804 10:41:05 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:30:52.804 10:41:05 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:30:52.804 10:41:05 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:30:52.804 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 
00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:52.804 #define SPDK_CONFIG_H 00:30:52.804 #define SPDK_CONFIG_APPS 1 00:30:52.804 #define SPDK_CONFIG_ARCH native 00:30:52.804 #undef SPDK_CONFIG_ASAN 00:30:52.804 #undef SPDK_CONFIG_AVAHI 00:30:52.804 #undef SPDK_CONFIG_CET 00:30:52.804 #define SPDK_CONFIG_COVERAGE 1 00:30:52.804 #define SPDK_CONFIG_CROSS_PREFIX 00:30:52.804 #define SPDK_CONFIG_CRYPTO 1 00:30:52.804 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:30:52.804 #undef SPDK_CONFIG_CUSTOMOCF 00:30:52.804 #undef SPDK_CONFIG_DAOS 00:30:52.804 #define SPDK_CONFIG_DAOS_DIR 00:30:52.804 #define SPDK_CONFIG_DEBUG 1 00:30:52.804 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:30:52.804 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:30:52.804 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:30:52.804 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:30:52.804 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:52.804 #undef SPDK_CONFIG_DPDK_UADK 00:30:52.804 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:30:52.804 #define SPDK_CONFIG_EXAMPLES 1 00:30:52.804 #undef SPDK_CONFIG_FC 00:30:52.804 #define SPDK_CONFIG_FC_PATH 00:30:52.804 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:52.804 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:52.804 #undef SPDK_CONFIG_FUSE 00:30:52.804 #undef SPDK_CONFIG_FUZZER 00:30:52.804 #define SPDK_CONFIG_FUZZER_LIB 00:30:52.804 #undef SPDK_CONFIG_GOLANG 00:30:52.804 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:52.804 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:52.804 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:52.804 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:52.804 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:52.804 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:52.804 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:52.804 #define SPDK_CONFIG_IDXD 1 00:30:52.804 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:52.804 #define SPDK_CONFIG_IPSEC_MB 1 00:30:52.804 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:30:52.804 #define SPDK_CONFIG_ISAL 1 00:30:52.804 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:52.804 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:52.804 #define SPDK_CONFIG_LIBDIR 00:30:52.804 #undef SPDK_CONFIG_LTO 00:30:52.804 #define SPDK_CONFIG_MAX_LCORES 128 00:30:52.804 #define SPDK_CONFIG_NVME_CUSE 1 00:30:52.804 #undef SPDK_CONFIG_OCF 
00:30:52.804 #define SPDK_CONFIG_OCF_PATH 00:30:52.804 #define SPDK_CONFIG_OPENSSL_PATH 00:30:52.804 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:52.804 #define SPDK_CONFIG_PGO_DIR 00:30:52.804 #undef SPDK_CONFIG_PGO_USE 00:30:52.804 #define SPDK_CONFIG_PREFIX /usr/local 00:30:52.804 #undef SPDK_CONFIG_RAID5F 00:30:52.804 #undef SPDK_CONFIG_RBD 00:30:52.804 #define SPDK_CONFIG_RDMA 1 00:30:52.804 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:52.804 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:52.804 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:52.804 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:52.804 #define SPDK_CONFIG_SHARED 1 00:30:52.804 #undef SPDK_CONFIG_SMA 00:30:52.804 #define SPDK_CONFIG_TESTS 1 00:30:52.804 #undef SPDK_CONFIG_TSAN 00:30:52.804 #define SPDK_CONFIG_UBLK 1 00:30:52.804 #define SPDK_CONFIG_UBSAN 1 00:30:52.804 #undef SPDK_CONFIG_UNIT_TESTS 00:30:52.804 #undef SPDK_CONFIG_URING 00:30:52.804 #define SPDK_CONFIG_URING_PATH 00:30:52.804 #undef SPDK_CONFIG_URING_ZNS 00:30:52.804 #undef SPDK_CONFIG_USDT 00:30:52.804 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:30:52.804 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:30:52.804 #undef SPDK_CONFIG_VFIO_USER 00:30:52.804 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:52.804 #define SPDK_CONFIG_VHOST 1 00:30:52.804 #define SPDK_CONFIG_VIRTIO 1 00:30:52.804 #undef SPDK_CONFIG_VTUNE 00:30:52.804 #define SPDK_CONFIG_VTUNE_DIR 00:30:52.804 #define SPDK_CONFIG_WERROR 1 00:30:52.804 #define SPDK_CONFIG_WPDK_DIR 00:30:52.804 #undef SPDK_CONFIG_XNVME 00:30:52.804 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:52.804 10:41:05 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:52.804 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:52.804 10:41:05 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.804 10:41:05 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.804 10:41:05 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.804 10:41:05 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.804 10:41:05 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.804 10:41:05 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.804 10:41:05 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:30:52.804 10:41:05 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.804 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:52.804 10:41:05 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:30:52.804 10:41:05 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:52.804 10:41:05 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:30:52.804 10:41:05 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:30:52.804 10:41:05 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:53.064 10:41:05 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:30:53.064 10:41:05 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:30:53.064 10:41:05 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... 
!= QEMU ]] 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:30:53.065 10:41:05 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:53.065 10:41:05 
reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:30:53.065 10:41:05 reactor_set_interrupt 
-- common/autotest_common.sh@128 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : v22.11.4 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:30:53.065 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:30:53.065 10:41:05 reactor_set_interrupt -- 
common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:53.066 10:41:05 reactor_set_interrupt -- 
common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 
00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 3543474 ]] 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 3543474 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.dmKbqb 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.dmKbqb/tests/interrupt /tmp/spdk.dmKbqb 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:30:53.066 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 
00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=53497655296 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=8244649984 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338679808 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9781248 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30869897216 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1257472 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:30:53.067 10:41:05 reactor_set_interrupt -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:30:53.067 * Looking for test storage... 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=53497655296 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=10459242496 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:53.067 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:30:53.067 10:41:05 reactor_set_interrupt -- 
common/autotest_common.sh@29 -- # exec 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=3543552 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:53.067 10:41:05 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 3543552 /var/tmp/spdk.sock 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3543552 ']' 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.067 10:41:05 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:53.067 [2024-07-26 10:41:05.851738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:30:53.067 [2024-07-26 10:41:05.851802] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543552 ] 00:30:53.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.067 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:53.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.067 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:53.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.067 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:53.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.067 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:53.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.067 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:53.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.067 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:30:53.068 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:53.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:53.068 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:53.327 [2024-07-26 10:41:05.983545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:53.327 [2024-07-26 10:41:06.028229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.327 [2024-07-26 10:41:06.028327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.327 [2024-07-26 10:41:06.028332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.327 [2024-07-26 10:41:06.090777] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
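At this point the harness has launched the interrupt_tgt example on a 3-core mask and is blocking until the RPC socket at /var/tmp/spdk.sock accepts connections before it creates the test bdevs. A minimal standalone sketch of that startup step, using the same binary and flags shown in the trace above; the polling loop is only an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation:
# Launch the interrupt-mode target on cores 0-2 with the RPC socket the test uses.
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt \
    -m 0x07 -r /var/tmp/spdk.sock -E -g &
tgt_pid=$!
# Poll until the UNIX-domain RPC socket answers; only then is it safe to issue rpc.py calls
# such as the bdev_malloc_create / bdev_aio_create setup that follows in the log.
for _ in $(seq 1 100); do
    if /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done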
00:30:53.893 10:41:06 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.893 10:41:06 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:30:53.893 10:41:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:30:53.893 10:41:06 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:54.151 Malloc0 00:30:54.151 Malloc1 00:30:54.151 Malloc2 00:30:54.151 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:30:54.151 10:41:07 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:30:54.151 10:41:07 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:54.151 10:41:07 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:54.409 5000+0 records in 00:30:54.410 5000+0 records out 00:30:54.410 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0270172 s, 379 MB/s 00:30:54.410 10:41:07 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:30:54.410 AIO0 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 3543552 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 3543552 without_thd 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=3543552 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@59 -- # 
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:54.668 10:41:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:54.926 10:41:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:30:54.926 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:30:54.926 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:30:54.926 spdk_thread ids are 1 on reactor0. 00:30:54.926 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:54.926 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3543552 0 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3543552 0 idle 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:54.927 10:41:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3543552 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.33 reactor_0' 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543552 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.33 reactor_0 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3543552 1 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3543552 1 idle 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 
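The thread_get_stats/jq pipeline a few entries above is how the test maps a reactor's cpumask to the SPDK thread IDs running on it (reactor 0 yields app_thread id 1, reactor 2 yields none yet). A standalone sketch of that lookup, reusing the rpc.py path and jq filter shown in the trace; the RPC shell variable is just shorthand, and the cpumask is passed in decimal form (1 for 0x1, 4 for 0x4) exactly as the test does:

  # List SPDK thread ids pinned to a given reactor cpumask (values as used in this run)
  RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  reactor_cpumask=1      # 1 for reactor 0 (0x1), 4 for reactor 2 (0x4)
  "$RPC" thread_get_stats |
      jq --arg reactor_cpumask "$reactor_cpumask" \
         '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'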
00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:55.185 10:41:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3543555 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_1' 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543555 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_1 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3543552 2 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3543552 2 idle 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3543556 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_2' 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543556 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_2 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:55.443 10:41:08 
reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:55.443 10:41:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:55.444 10:41:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:55.444 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:30:55.444 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:30:55.444 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:30:55.701 [2024-07-26 10:41:08.537185] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:55.701 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:30:55.958 [2024-07-26 10:41:08.764805] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:30:55.958 [2024-07-26 10:41:08.765070] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:55.958 10:41:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:30:56.217 [2024-07-26 10:41:08.988784] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
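Each reactor_is_idle / reactor_is_busy block above boils down to one thread-level top sample: grab the reactor_N row for the target PID, strip leading whitespace, take column 9 as the CPU rate, and compare it against the thresholds visible in the trace (the busy check fails below roughly 70%, the idle check fails above roughly 30%). A condensed sketch of that probe, with the PID and thread name filled in from this run as placeholders:

  # One-shot CPU probe for a reactor thread (pid/thread values are from this run)
  pid=3543552
  thread=reactor_0
  cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "$thread" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%.*}            # 99.9 -> 99, 0.0 -> 0, matching the truncation in the trace
  [ "$cpu_rate" -ge 70 ] && echo "$thread looks busy"
  [ "$cpu_rate" -le 30 ] && echo "$thread looks idle"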
00:30:56.217 [2024-07-26 10:41:08.988887] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3543552 0 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3543552 0 busy 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:56.217 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3543552 root 20 0 128.2g 34048 21504 R 99.9 0.1 0:00.73 reactor_0' 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543552 root 20 0 128.2g 34048 21504 R 99.9 0.1 0:00.73 reactor_0 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3543552 2 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3543552 2 busy 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor='3543556 root 20 0 128.2g 34048 21504 R 93.8 0.1 0:00.36 reactor_2' 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543556 root 20 0 128.2g 34048 21504 R 93.8 0.1 0:00.36 reactor_2 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:56.475 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:30:56.733 [2024-07-26 10:41:09.580785] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:30:56.733 [2024-07-26 10:41:09.580876] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 3543552 2 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3543552 2 idle 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:56.733 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3543556 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.59 reactor_2' 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543556 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.59 reactor_2 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 
idle = \i\d\l\e ]] 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:56.992 10:41:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:30:57.250 [2024-07-26 10:41:09.992780] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:30:57.250 [2024-07-26 10:41:09.992896] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:57.250 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:30:57.250 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:30:57.250 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:30:57.508 [2024-07-26 10:41:10.225254] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 3543552 0 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3543552 0 idle 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3543552 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3543552 -w 256 00:30:57.508 10:41:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3543552 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:01.55 reactor_0' 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3543552 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:01.55 reactor_0 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:30:57.767 
10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:30:57.767 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 3543552 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3543552 ']' 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 3543552 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3543552 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3543552' 00:30:57.767 killing process with pid 3543552 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 3543552 00:30:57.767 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 3543552 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=3544425 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:58.026 10:41:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 3544425 /var/tmp/spdk.sock 00:30:58.026 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3544425 ']' 00:30:58.026 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.026 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:58.026 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.026 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:58.026 10:41:10 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:58.026 [2024-07-26 10:41:10.735990] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:30:58.026 [2024-07-26 10:41:10.736052] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3544425 ] 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:58.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.026 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.6 cannot be 
used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:58.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:58.027 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:58.027 [2024-07-26 10:41:10.869783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:58.027 [2024-07-26 10:41:10.916500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.027 [2024-07-26 10:41:10.916521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.027 [2024-07-26 10:41:10.916524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.286 [2024-07-26 10:41:10.979578] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
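The second target (pid 3544425) now repeats the sequence already run against pid 3543552, this time without moving app_thread off reactor 0 first. In both passes the actual mode switching is a handful of RPC calls that appear verbatim in the trace; collected in one place below, assuming the same rpc.py path and RPC socket (the RPC shell variable is shorthand only):

  # The mode switches exercised by this test, as issued via rpc.py in the trace
  RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  # first pass only: move app_thread (id 1) onto core 1 before touching reactor 0
  "$RPC" thread_set_cpumask -i 1 -m 0x2
  # switch reactors 0 and 2 from interrupt to poll mode (-d), verify they go busy,
  # then switch them back and verify they go idle again
  "$RPC" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
  "$RPC" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
  "$RPC" --plugin interrupt_plugin reactor_set_interrupt_mode 2
  "$RPC" --plugin interrupt_plugin reactor_set_interrupt_mode 0
  # first pass only: restore app_thread to core 0
  "$RPC" thread_set_cpumask -i 1 -m 0x1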
00:30:58.854 10:41:11 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:58.854 10:41:11 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:30:58.854 10:41:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:30:58.854 10:41:11 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:59.113 Malloc0 00:30:59.113 Malloc1 00:30:59.113 Malloc2 00:30:59.113 10:41:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:30:59.113 10:41:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:30:59.113 10:41:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:59.113 10:41:11 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:59.113 5000+0 records in 00:30:59.113 5000+0 records out 00:30:59.113 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0119404 s, 858 MB/s 00:30:59.113 10:41:11 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:30:59.372 AIO0 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 3544425 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 3544425 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=3544425 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:59.372 10:41:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:59.631 10:41:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:30:59.631 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == 
$reactor_cpumask)|.id' 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:30:59.632 10:41:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:30:59.891 spdk_thread ids are 1 on reactor0. 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3544425 0 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3544425 0 idle 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:30:59.891 10:41:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544425 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.33 reactor_0' 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544425 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.33 reactor_0 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3544425 1 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3544425 1 idle 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:00.150 10:41:12 reactor_set_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:31:00.150 10:41:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544429 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_1' 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544429 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_1 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3544425 2 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3544425 2 idle 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:31:00.150 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544430 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_2' 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544430 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.00 reactor_2 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 
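Before either pass starts flipping modes, setup_bdev_mem and setup_bdev_aio (seen above for both targets) create three malloc bdevs (Malloc0-2) over rpc.py and an AIO bdev backed by a ~10 MB file; the cleanup step at the end of each pass removes that file again. The file-backed part, with the paths exactly as they appear in the trace and the SPDK shell variable used only as shorthand:

  # Backing file and AIO bdev created for the test, removed again by cleanup
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  dd if=/dev/zero of="$SPDK/test/interrupt/aiofile" bs=2048 count=5000    # 10240000 bytes
  "$SPDK/scripts/rpc.py" bdev_aio_create "$SPDK/test/interrupt/aiofile" AIO0 2048
  # at the end of each pass:
  rm -f "$SPDK/test/interrupt/aiofile"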
00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:31:00.409 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:31:00.669 [2024-07-26 10:41:13.409120] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:31:00.669 [2024-07-26 10:41:13.409347] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:31:00.669 [2024-07-26 10:41:13.409423] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:31:00.669 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:31:00.929 [2024-07-26 10:41:13.637610] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:31:00.929 [2024-07-26 10:41:13.637790] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3544425 0 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3544425 0 busy 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544425 root 20 0 128.2g 34048 21504 R 99.9 0.1 0:00.75 reactor_0' 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544425 root 20 0 128.2g 34048 21504 R 99.9 0.1 0:00.75 reactor_0 00:31:00.929 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:31:01.187 10:41:13 
reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3544425 2 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3544425 2 busy 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:31:01.187 10:41:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544430 root 20 0 128.2g 34048 21504 R 99.9 0.1 0:00.35 reactor_2' 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544430 root 20 0 128.2g 34048 21504 R 99.9 0.1 0:00.35 reactor_2 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:01.187 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:31:01.446 [2024-07-26 10:41:14.211232] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:31:01.446 [2024-07-26 10:41:14.211499] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 3544425 2 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3544425 2 idle 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:31:01.446 10:41:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544430 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.57 reactor_2' 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544430 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:00.57 reactor_2 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:01.704 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:31:01.962 [2024-07-26 10:41:14.680433] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:31:01.962 [2024-07-26 10:41:14.680609] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:31:01.962 [2024-07-26 10:41:14.680630] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 3544425 0 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3544425 0 idle 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3544425 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3544425 -w 256 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:31:01.962 10:41:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3544425 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:01.61 reactor_0' 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3544425 root 20 0 128.2g 34048 21504 S 0.0 0.1 0:01.61 reactor_0 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:31:02.220 10:41:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 3544425 00:31:02.220 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3544425 ']' 00:31:02.220 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 3544425 00:31:02.220 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3544425 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3544425' 00:31:02.221 killing process with pid 3544425 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 3544425 00:31:02.221 10:41:14 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 3544425 00:31:02.480 10:41:15 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:31:02.480 10:41:15 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:31:02.480 00:31:02.480 real 0m9.629s 00:31:02.480 user 0m8.926s 00:31:02.480 sys 0m2.039s 00:31:02.480 10:41:15 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.480 10:41:15 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:02.480 ************************************ 00:31:02.480 END TEST reactor_set_interrupt 00:31:02.480 ************************************ 00:31:02.480 10:41:15 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:31:02.480 10:41:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:02.480 10:41:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.480 10:41:15 -- common/autotest_common.sh@10 -- # set +x 00:31:02.480 ************************************ 00:31:02.480 START TEST reap_unregistered_poller 00:31:02.480 ************************************ 00:31:02.480 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:31:02.480 * Looking for test storage... 00:31:02.480 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.480 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:31:02.481 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:31:02.481 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.481 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.481 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:31:02.481 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:02.481 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 
00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:31:02.481 10:41:15 
reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:31:02.481 10:41:15 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:31:02.481 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 
00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:31:02.481 10:41:15 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:31:02.482 #define SPDK_CONFIG_H 00:31:02.482 #define SPDK_CONFIG_APPS 1 00:31:02.482 #define SPDK_CONFIG_ARCH native 00:31:02.482 #undef SPDK_CONFIG_ASAN 00:31:02.482 #undef SPDK_CONFIG_AVAHI 00:31:02.482 #undef SPDK_CONFIG_CET 00:31:02.482 #define SPDK_CONFIG_COVERAGE 1 00:31:02.482 #define SPDK_CONFIG_CROSS_PREFIX 00:31:02.482 #define SPDK_CONFIG_CRYPTO 1 00:31:02.482 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:31:02.482 #undef SPDK_CONFIG_CUSTOMOCF 00:31:02.482 #undef SPDK_CONFIG_DAOS 00:31:02.482 #define SPDK_CONFIG_DAOS_DIR 00:31:02.482 #define SPDK_CONFIG_DEBUG 1 00:31:02.482 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:31:02.482 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:31:02.482 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/crypto-phy-autotest/dpdk/build/include 00:31:02.482 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:31:02.482 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:31:02.482 #undef SPDK_CONFIG_DPDK_UADK 00:31:02.482 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:31:02.482 #define SPDK_CONFIG_EXAMPLES 1 00:31:02.482 #undef SPDK_CONFIG_FC 00:31:02.482 #define SPDK_CONFIG_FC_PATH 00:31:02.482 #define SPDK_CONFIG_FIO_PLUGIN 1 00:31:02.482 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:31:02.482 #undef SPDK_CONFIG_FUSE 00:31:02.482 #undef SPDK_CONFIG_FUZZER 00:31:02.482 #define SPDK_CONFIG_FUZZER_LIB 00:31:02.482 #undef SPDK_CONFIG_GOLANG 00:31:02.482 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:31:02.482 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:31:02.482 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:31:02.482 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:31:02.482 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:31:02.482 #undef SPDK_CONFIG_HAVE_LIBBSD 00:31:02.482 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:31:02.482 #define SPDK_CONFIG_IDXD 1 00:31:02.482 #define SPDK_CONFIG_IDXD_KERNEL 1 00:31:02.482 #define SPDK_CONFIG_IPSEC_MB 1 00:31:02.482 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib 00:31:02.482 #define 
SPDK_CONFIG_ISAL 1 00:31:02.482 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:31:02.482 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:31:02.482 #define SPDK_CONFIG_LIBDIR 00:31:02.482 #undef SPDK_CONFIG_LTO 00:31:02.482 #define SPDK_CONFIG_MAX_LCORES 128 00:31:02.482 #define SPDK_CONFIG_NVME_CUSE 1 00:31:02.482 #undef SPDK_CONFIG_OCF 00:31:02.482 #define SPDK_CONFIG_OCF_PATH 00:31:02.482 #define SPDK_CONFIG_OPENSSL_PATH 00:31:02.482 #undef SPDK_CONFIG_PGO_CAPTURE 00:31:02.482 #define SPDK_CONFIG_PGO_DIR 00:31:02.482 #undef SPDK_CONFIG_PGO_USE 00:31:02.482 #define SPDK_CONFIG_PREFIX /usr/local 00:31:02.482 #undef SPDK_CONFIG_RAID5F 00:31:02.482 #undef SPDK_CONFIG_RBD 00:31:02.482 #define SPDK_CONFIG_RDMA 1 00:31:02.482 #define SPDK_CONFIG_RDMA_PROV verbs 00:31:02.482 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:31:02.482 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:31:02.482 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:31:02.482 #define SPDK_CONFIG_SHARED 1 00:31:02.482 #undef SPDK_CONFIG_SMA 00:31:02.482 #define SPDK_CONFIG_TESTS 1 00:31:02.482 #undef SPDK_CONFIG_TSAN 00:31:02.482 #define SPDK_CONFIG_UBLK 1 00:31:02.482 #define SPDK_CONFIG_UBSAN 1 00:31:02.482 #undef SPDK_CONFIG_UNIT_TESTS 00:31:02.482 #undef SPDK_CONFIG_URING 00:31:02.482 #define SPDK_CONFIG_URING_PATH 00:31:02.482 #undef SPDK_CONFIG_URING_ZNS 00:31:02.482 #undef SPDK_CONFIG_USDT 00:31:02.482 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:31:02.482 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:31:02.482 #undef SPDK_CONFIG_VFIO_USER 00:31:02.482 #define SPDK_CONFIG_VFIO_USER_DIR 00:31:02.482 #define SPDK_CONFIG_VHOST 1 00:31:02.482 #define SPDK_CONFIG_VIRTIO 1 00:31:02.482 #undef SPDK_CONFIG_VTUNE 00:31:02.482 #define SPDK_CONFIG_VTUNE_DIR 00:31:02.482 #define SPDK_CONFIG_WERROR 1 00:31:02.482 #define SPDK_CONFIG_WPDK_DIR 00:31:02.482 #undef SPDK_CONFIG_XNVME 00:31:02.482 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:31:02.482 10:41:15 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:31:02.482 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:31:02.482 10:41:15 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.482 10:41:15 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.482 10:41:15 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.482 10:41:15 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.482 10:41:15 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.482 10:41:15 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.482 10:41:15 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:31:02.482 10:41:15 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.482 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:31:02.482 10:41:15 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@81 -- # 
[[ ............................... != QEMU ]] 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:31:02.744 10:41:15 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 
00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:31:02.744 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 
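Every `: 0` (or `: 1`, `: rdma`, ...) entry followed by an `export SPDK_TEST_*` entry in this stretch of the trace is autotest_common.sh giving a test flag a default value and then exporting it for the child scripts. A hedged sketch of that idiom, using a flag that is visible above with its default of 0:

    # keep a value supplied by the caller, otherwise fall back to the default, then export it
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME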
00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : v22.11.4 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export 
SPDK_TEST_XNVME 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/intel-ipsec-mb/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/isa-l/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:31:02.745 10:41:15 
reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:31:02.745 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@278 -- # 
HUGE_EVEN_ALLOC=yes 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 3545325 ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 3545325 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.xfuJar 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.xfuJar/tests/interrupt /tmp/spdk.xfuJar 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 
10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=53497487360 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=8244817920 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=12338675712 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9785344 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30869897216 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1257472 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:31:02.746 * Looking for test storage... 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=53497487360 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=10459410432 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.746 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:31:02.746 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:31:02.747 10:41:15 
reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=3545366 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 3545366 /var/tmp/spdk.sock 00:31:02.747 10:41:15 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 3545366 ']' 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:02.747 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:31:02.747 [2024-07-26 10:41:15.542701] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:02.747 [2024-07-26 10:41:15.542762] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545366 ] 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 
EAL: Requested device 0000:3d:02.7 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:02.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:02.747 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:03.008 [2024-07-26 10:41:15.666717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:03.009 [2024-07-26 10:41:15.711694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.009 [2024-07-26 10:41:15.711788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.009 [2024-07-26 10:41:15.711791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.009 [2024-07-26 10:41:15.773855] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
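The QAT/EAL messages above are the DPDK QAT driver hitting its device limit during probe; once the three reactors report in and the app thread is switched to interrupt mode, start_intr_tgt has done its job and the test moves on to the poller queries traced below. A hedged sketch of that launch-and-query sequence, with the binary, core mask, and RPC socket exactly as in the trace and a plain socket-wait loop standing in for SPDK's waitforlisten helper:

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    # launch the example interrupt target on cores 0-2 with the RPC socket used above
    "$spdk"/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
    intr_tgt_pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
    # list the app thread's active pollers; at this point only the timed
    # rpc_subsystem_poll_servers poller exists, so this prints nothing
    "$spdk"/scripts/rpc.py -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0].active_pollers[].name'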
00:31:03.009 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:03.009 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:31:03.009 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.009 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:31:03.009 10:41:15 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:31:03.009 "name": "app_thread", 00:31:03.009 "id": 1, 00:31:03.009 "active_pollers": [], 00:31:03.009 "timed_pollers": [ 00:31:03.009 { 00:31:03.009 "name": "rpc_subsystem_poll_servers", 00:31:03.009 "id": 1, 00:31:03.009 "state": "waiting", 00:31:03.009 "run_count": 0, 00:31:03.009 "busy_count": 0, 00:31:03.009 "period_ticks": 10000000 00:31:03.009 } 00:31:03.009 ], 00:31:03.009 "paused_pollers": [] 00:31:03.009 }' 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:31:03.009 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:31:03.310 10:41:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:31:03.310 10:41:15 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:31:03.310 10:41:15 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:03.310 10:41:15 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:31:03.310 5000+0 records in 00:31:03.310 5000+0 records out 00:31:03.310 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0258818 s, 396 MB/s 00:31:03.310 10:41:15 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:31:03.310 AIO0 00:31:03.310 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:03.581 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:31:03.840 10:41:16 reap_unregistered_poller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:31:03.840 "name": "app_thread", 00:31:03.840 "id": 1, 00:31:03.840 "active_pollers": [], 00:31:03.840 "timed_pollers": [ 00:31:03.840 { 00:31:03.840 "name": "rpc_subsystem_poll_servers", 00:31:03.840 "id": 1, 00:31:03.840 "state": "waiting", 00:31:03.840 "run_count": 0, 00:31:03.840 "busy_count": 0, 00:31:03.840 "period_ticks": 10000000 00:31:03.840 } 00:31:03.840 ], 00:31:03.840 "paused_pollers": [] 00:31:03.840 }' 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:31:03.840 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 3545366 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 3545366 ']' 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 3545366 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3545366 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3545366' 00:31:03.840 killing process with pid 3545366 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 3545366 00:31:03.840 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 3545366 00:31:04.098 10:41:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:31:04.098 10:41:16 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:31:04.098 00:31:04.098 real 0m1.699s 00:31:04.098 user 0m1.209s 00:31:04.098 sys 0m0.602s 00:31:04.098 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.098 10:41:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:31:04.098 ************************************ 00:31:04.098 END TEST reap_unregistered_poller 00:31:04.098 ************************************ 00:31:04.098 10:41:16 -- spdk/autotest.sh@202 -- # uname -s 00:31:04.098 10:41:16 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:31:04.098 10:41:16 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:31:04.099 10:41:16 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:31:04.099 10:41:16 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:31:04.099 10:41:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:31:04.099 10:41:16 -- spdk/autotest.sh@264 -- # timing_exit lib 00:31:04.099 10:41:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:04.099 10:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:04.357 10:41:17 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:31:04.357 10:41:17 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:31:04.357 10:41:17 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:31:04.357 10:41:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:04.357 10:41:17 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:31:04.358 10:41:17 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:31:04.358 10:41:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:04.358 10:41:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.358 10:41:17 -- common/autotest_common.sh@10 -- # set +x 00:31:04.358 ************************************ 00:31:04.358 START TEST compress_compdev 00:31:04.358 ************************************ 00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:31:04.358 * Looking for test storage... 
00:31:04.358 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:31:04.358 10:41:17 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.358 10:41:17 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.358 10:41:17 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.358 10:41:17 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.358 10:41:17 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.358 10:41:17 compress_compdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.358 10:41:17 compress_compdev -- paths/export.sh@5 -- # export PATH 00:31:04.358 10:41:17 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.358 10:41:17 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3545727 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3545727 00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3545727 ']' 00:31:04.358 10:41:17 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
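The bdevperf process whose RPC socket is being waited on here is launched with a fixed queue depth and the compressdev accel configuration. A rough, non-authoritative equivalent of that launch, with paths shortened to <spdk>:

  # queue depth 32, 4 KiB verify workload for 3 s, accel config from dpdk.json, cores 1-2 (-m 0x6)
  <spdk>/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 \
      -c <spdk>/test/compress/dpdk.json &
  bdevperf_pid=$!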
00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.358 10:41:17 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:31:04.618 [2024-07-26 10:41:17.263005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:04.618 [2024-07-26 10:41:17.263069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3545727 ] 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.4 cannot be used 
00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:04.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:04.618 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:04.618 [2024-07-26 10:41:17.384310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:04.618 [2024-07-26 10:41:17.430480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.618 [2024-07-26 10:41:17.430486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.186 [2024-07-26 10:41:18.034536] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:31:05.444 10:41:18 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.444 10:41:18 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:31:05.444 10:41:18 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:31:05.444 10:41:18 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:05.444 10:41:18 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:08.732 [2024-07-26 10:41:21.255828] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1460380 PMD being used: compress_qat 00:31:08.732 10:41:21 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:08.732 10:41:21 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:08.991 [ 00:31:08.991 { 00:31:08.991 "name": "Nvme0n1", 00:31:08.991 "aliases": [ 00:31:08.991 "20996888-6a01-4fda-8da4-84dcdf462a19" 00:31:08.991 ], 00:31:08.991 "product_name": "NVMe disk", 00:31:08.991 "block_size": 512, 00:31:08.991 "num_blocks": 3907029168, 00:31:08.991 "uuid": "20996888-6a01-4fda-8da4-84dcdf462a19", 00:31:08.991 "assigned_rate_limits": { 00:31:08.991 "rw_ios_per_sec": 0, 00:31:08.991 "rw_mbytes_per_sec": 0, 00:31:08.991 "r_mbytes_per_sec": 0, 00:31:08.991 "w_mbytes_per_sec": 0 00:31:08.991 }, 00:31:08.991 "claimed": false, 00:31:08.991 "zoned": false, 00:31:08.991 "supported_io_types": { 00:31:08.991 "read": true, 00:31:08.991 "write": true, 00:31:08.991 "unmap": true, 00:31:08.991 "flush": true, 00:31:08.991 "reset": true, 00:31:08.991 "nvme_admin": true, 00:31:08.991 "nvme_io": true, 00:31:08.991 "nvme_io_md": false, 00:31:08.991 "write_zeroes": true, 00:31:08.991 "zcopy": false, 00:31:08.991 "get_zone_info": false, 00:31:08.991 "zone_management": false, 00:31:08.991 "zone_append": false, 00:31:08.991 "compare": false, 00:31:08.991 "compare_and_write": false, 00:31:08.991 "abort": true, 00:31:08.991 "seek_hole": false, 00:31:08.991 "seek_data": false, 00:31:08.991 "copy": false, 00:31:08.991 "nvme_iov_md": false 00:31:08.991 }, 00:31:08.991 "driver_specific": { 00:31:08.991 "nvme": [ 00:31:08.991 { 00:31:08.991 "pci_address": "0000:d8:00.0", 00:31:08.991 "trid": { 00:31:08.991 "trtype": "PCIe", 00:31:08.991 "traddr": "0000:d8:00.0" 00:31:08.991 }, 00:31:08.991 "ctrlr_data": { 00:31:08.991 "cntlid": 0, 00:31:08.991 "vendor_id": "0x8086", 00:31:08.991 "model_number": "INTEL SSDPE2KX020T8", 00:31:08.991 "serial_number": "BTLJ125505KA2P0BGN", 00:31:08.991 "firmware_revision": "VDV10170", 00:31:08.991 "oacs": { 00:31:08.991 "security": 0, 00:31:08.991 "format": 1, 00:31:08.991 "firmware": 1, 00:31:08.991 "ns_manage": 1 00:31:08.991 }, 00:31:08.991 "multi_ctrlr": false, 00:31:08.991 "ana_reporting": false 00:31:08.991 }, 00:31:08.991 "vs": { 00:31:08.991 "nvme_version": "1.2" 00:31:08.991 }, 00:31:08.991 "ns_data": { 00:31:08.991 "id": 1, 00:31:08.991 "can_share": false 00:31:08.991 } 00:31:08.991 } 00:31:08.991 ], 00:31:08.991 "mp_policy": "active_passive" 00:31:08.991 } 00:31:08.991 } 00:31:08.991 ] 00:31:08.991 10:41:21 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:08.991 10:41:21 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:09.250 [2024-07-26 10:41:21.965029] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x108e510 PMD being used: compress_qat 00:31:10.187 64e4a115-100f-4d30-9db6-6d57f205330a 00:31:10.187 10:41:22 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:10.754 8edcce40-403c-4655-92cd-e23a172bd35d 00:31:10.754 10:41:23 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:10.754 10:41:23 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:31:10.754 10:41:23 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:10.754 10:41:23 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:10.754 10:41:23 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:10.754 10:41:23 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
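The volume stack assembled here (and completed in the next few records) comes down to three RPCs; a condensed sketch with <spdk> for the checkout path and the 100 MiB size taken from the bdev dump above (204800 blocks of 512 bytes):

  # logical volume store on the NVMe namespace, then a 100 MiB thin-provisioned volume
  <spdk>/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
  <spdk>/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
  # compress bdev layered on the volume, persistent-memory metadata under /tmp/pmem
  <spdk>/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem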
00:31:10.754 10:41:23 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:11.011 10:41:23 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:11.011 [ 00:31:11.011 { 00:31:11.011 "name": "8edcce40-403c-4655-92cd-e23a172bd35d", 00:31:11.011 "aliases": [ 00:31:11.011 "lvs0/lv0" 00:31:11.011 ], 00:31:11.011 "product_name": "Logical Volume", 00:31:11.011 "block_size": 512, 00:31:11.011 "num_blocks": 204800, 00:31:11.011 "uuid": "8edcce40-403c-4655-92cd-e23a172bd35d", 00:31:11.011 "assigned_rate_limits": { 00:31:11.011 "rw_ios_per_sec": 0, 00:31:11.011 "rw_mbytes_per_sec": 0, 00:31:11.011 "r_mbytes_per_sec": 0, 00:31:11.011 "w_mbytes_per_sec": 0 00:31:11.011 }, 00:31:11.011 "claimed": false, 00:31:11.011 "zoned": false, 00:31:11.011 "supported_io_types": { 00:31:11.011 "read": true, 00:31:11.011 "write": true, 00:31:11.011 "unmap": true, 00:31:11.011 "flush": false, 00:31:11.011 "reset": true, 00:31:11.011 "nvme_admin": false, 00:31:11.011 "nvme_io": false, 00:31:11.011 "nvme_io_md": false, 00:31:11.011 "write_zeroes": true, 00:31:11.011 "zcopy": false, 00:31:11.011 "get_zone_info": false, 00:31:11.011 "zone_management": false, 00:31:11.011 "zone_append": false, 00:31:11.011 "compare": false, 00:31:11.011 "compare_and_write": false, 00:31:11.011 "abort": false, 00:31:11.011 "seek_hole": true, 00:31:11.011 "seek_data": true, 00:31:11.011 "copy": false, 00:31:11.011 "nvme_iov_md": false 00:31:11.011 }, 00:31:11.011 "driver_specific": { 00:31:11.011 "lvol": { 00:31:11.011 "lvol_store_uuid": "64e4a115-100f-4d30-9db6-6d57f205330a", 00:31:11.011 "base_bdev": "Nvme0n1", 00:31:11.011 "thin_provision": true, 00:31:11.011 "num_allocated_clusters": 0, 00:31:11.011 "snapshot": false, 00:31:11.011 "clone": false, 00:31:11.011 "esnap_clone": false 00:31:11.011 } 00:31:11.011 } 00:31:11.011 } 00:31:11.011 ] 00:31:11.011 10:41:23 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:11.011 10:41:23 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:11.011 10:41:23 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:11.268 [2024-07-26 10:41:24.119828] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:11.268 COMP_lvs0/lv0 00:31:11.268 10:41:24 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:11.268 10:41:24 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:31:11.268 10:41:24 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:11.268 10:41:24 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:11.268 10:41:24 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:11.268 10:41:24 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:11.268 10:41:24 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:11.527 10:41:24 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:11.786 [ 00:31:11.786 { 00:31:11.786 "name": "COMP_lvs0/lv0", 00:31:11.786 "aliases": 
[ 00:31:11.786 "ed54fe2c-8eae-505a-811b-10f7c86df1af" 00:31:11.786 ], 00:31:11.786 "product_name": "compress", 00:31:11.786 "block_size": 512, 00:31:11.786 "num_blocks": 200704, 00:31:11.786 "uuid": "ed54fe2c-8eae-505a-811b-10f7c86df1af", 00:31:11.786 "assigned_rate_limits": { 00:31:11.786 "rw_ios_per_sec": 0, 00:31:11.786 "rw_mbytes_per_sec": 0, 00:31:11.786 "r_mbytes_per_sec": 0, 00:31:11.786 "w_mbytes_per_sec": 0 00:31:11.786 }, 00:31:11.786 "claimed": false, 00:31:11.786 "zoned": false, 00:31:11.786 "supported_io_types": { 00:31:11.786 "read": true, 00:31:11.786 "write": true, 00:31:11.786 "unmap": false, 00:31:11.786 "flush": false, 00:31:11.786 "reset": false, 00:31:11.786 "nvme_admin": false, 00:31:11.786 "nvme_io": false, 00:31:11.786 "nvme_io_md": false, 00:31:11.786 "write_zeroes": true, 00:31:11.786 "zcopy": false, 00:31:11.786 "get_zone_info": false, 00:31:11.786 "zone_management": false, 00:31:11.786 "zone_append": false, 00:31:11.786 "compare": false, 00:31:11.786 "compare_and_write": false, 00:31:11.786 "abort": false, 00:31:11.786 "seek_hole": false, 00:31:11.786 "seek_data": false, 00:31:11.786 "copy": false, 00:31:11.786 "nvme_iov_md": false 00:31:11.786 }, 00:31:11.786 "driver_specific": { 00:31:11.786 "compress": { 00:31:11.786 "name": "COMP_lvs0/lv0", 00:31:11.786 "base_bdev_name": "8edcce40-403c-4655-92cd-e23a172bd35d", 00:31:11.786 "pm_path": "/tmp/pmem/c04ec8ec-7395-494e-a107-1ae1074af65d" 00:31:11.786 } 00:31:11.786 } 00:31:11.786 } 00:31:11.786 ] 00:31:11.786 10:41:24 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:11.786 10:41:24 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:12.044 [2024-07-26 10:41:24.714079] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7ffa041b15c0 PMD being used: compress_qat 00:31:12.044 [2024-07-26 10:41:24.716169] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1342410 PMD being used: compress_qat 00:31:12.044 Running I/O for 3 seconds... 
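With COMP_lvs0/lv0 registered, the 3-second verify pass is kicked off through the bdevperf RPC helper; a one-line sketch of that step:

  # start the workload that was configured on the bdevperf command line (-w verify -t 3)
  <spdk>/examples/bdev/bdevperf/bdevperf.py perform_tests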
00:31:15.326 00:31:15.327 Latency(us) 00:31:15.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.327 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:15.327 Verification LBA range: start 0x0 length 0x3100 00:31:15.327 COMP_lvs0/lv0 : 3.02 4105.76 16.04 0.00 0.00 7733.72 131.07 15623.78 00:31:15.327 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:15.327 Verification LBA range: start 0x3100 length 0x3100 00:31:15.327 COMP_lvs0/lv0 : 3.01 4226.44 16.51 0.00 0.00 7524.94 118.78 16567.50 00:31:15.327 =================================================================================================================== 00:31:15.327 Total : 8332.20 32.55 0.00 0.00 7627.83 118.78 16567.50 00:31:15.327 0 00:31:15.327 10:41:27 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:31:15.327 10:41:27 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:15.327 10:41:27 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:15.327 10:41:28 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:15.327 10:41:28 compress_compdev -- compress/compress.sh@78 -- # killprocess 3545727 00:31:15.327 10:41:28 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3545727 ']' 00:31:15.327 10:41:28 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3545727 00:31:15.327 10:41:28 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:31:15.327 10:41:28 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:15.327 10:41:28 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3545727 00:31:15.585 10:41:28 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:15.585 10:41:28 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:15.585 10:41:28 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3545727' 00:31:15.585 killing process with pid 3545727 00:31:15.585 10:41:28 compress_compdev -- common/autotest_common.sh@969 -- # kill 3545727 00:31:15.585 Received shutdown signal, test time was about 3.000000 seconds 00:31:15.585 00:31:15.585 Latency(us) 00:31:15.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.585 =================================================================================================================== 00:31:15.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:15.585 10:41:28 compress_compdev -- common/autotest_common.sh@974 -- # wait 3545727 00:31:18.121 10:41:30 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:31:18.121 10:41:30 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:31:18.121 10:41:30 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3547864 00:31:18.121 10:41:30 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:18.121 10:41:30 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:31:18.121 10:41:30 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3547864 00:31:18.121 10:41:30 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3547864 ']' 00:31:18.121 10:41:30 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.121 10:41:30 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:18.121 10:41:30 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.121 10:41:30 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:18.121 10:41:30 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:31:18.121 [2024-07-26 10:41:30.615782] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:18.121 [2024-07-26 10:41:30.615845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547864 ] 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.121 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:18.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:18.122 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:18.122 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:18.122 [2024-07-26 10:41:30.737116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.122 [2024-07-26 10:41:30.782362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.122 [2024-07-26 10:41:30.782369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.689 [2024-07-26 10:41:31.362956] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:31:18.689 10:41:31 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:18.689 10:41:31 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:31:18.689 10:41:31 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:31:18.689 10:41:31 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:18.689 10:41:31 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:22.008 [2024-07-26 10:41:34.607617] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e5c380 PMD being used: compress_qat 00:31:22.008 10:41:34 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:22.008 10:41:34 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:31:22.008 
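This second bdevperf pass repeats the same setup, but the run_bdevperf arguments (32 4096 3 512) carry an extra value that, as the bdev_compress_create call further below shows, is forwarded to the compress bdev as -l 512. A sketch of that one differing step:

  # same stack as the first pass, plus the explicit 512 value handed through via -l
  <spdk>/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512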
10:41:34 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:22.008 10:41:34 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:22.008 10:41:34 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:22.008 10:41:34 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:22.008 10:41:34 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:22.008 10:41:34 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:22.266 [ 00:31:22.266 { 00:31:22.266 "name": "Nvme0n1", 00:31:22.266 "aliases": [ 00:31:22.266 "ea0b4153-9bb4-4c06-a45a-419865e7cc92" 00:31:22.266 ], 00:31:22.266 "product_name": "NVMe disk", 00:31:22.266 "block_size": 512, 00:31:22.266 "num_blocks": 3907029168, 00:31:22.266 "uuid": "ea0b4153-9bb4-4c06-a45a-419865e7cc92", 00:31:22.266 "assigned_rate_limits": { 00:31:22.266 "rw_ios_per_sec": 0, 00:31:22.266 "rw_mbytes_per_sec": 0, 00:31:22.266 "r_mbytes_per_sec": 0, 00:31:22.266 "w_mbytes_per_sec": 0 00:31:22.266 }, 00:31:22.266 "claimed": false, 00:31:22.267 "zoned": false, 00:31:22.267 "supported_io_types": { 00:31:22.267 "read": true, 00:31:22.267 "write": true, 00:31:22.267 "unmap": true, 00:31:22.267 "flush": true, 00:31:22.267 "reset": true, 00:31:22.267 "nvme_admin": true, 00:31:22.267 "nvme_io": true, 00:31:22.267 "nvme_io_md": false, 00:31:22.267 "write_zeroes": true, 00:31:22.267 "zcopy": false, 00:31:22.267 "get_zone_info": false, 00:31:22.267 "zone_management": false, 00:31:22.267 "zone_append": false, 00:31:22.267 "compare": false, 00:31:22.267 "compare_and_write": false, 00:31:22.267 "abort": true, 00:31:22.267 "seek_hole": false, 00:31:22.267 "seek_data": false, 00:31:22.267 "copy": false, 00:31:22.267 "nvme_iov_md": false 00:31:22.267 }, 00:31:22.267 "driver_specific": { 00:31:22.267 "nvme": [ 00:31:22.267 { 00:31:22.267 "pci_address": "0000:d8:00.0", 00:31:22.267 "trid": { 00:31:22.267 "trtype": "PCIe", 00:31:22.267 "traddr": "0000:d8:00.0" 00:31:22.267 }, 00:31:22.267 "ctrlr_data": { 00:31:22.267 "cntlid": 0, 00:31:22.267 "vendor_id": "0x8086", 00:31:22.267 "model_number": "INTEL SSDPE2KX020T8", 00:31:22.267 "serial_number": "BTLJ125505KA2P0BGN", 00:31:22.267 "firmware_revision": "VDV10170", 00:31:22.267 "oacs": { 00:31:22.267 "security": 0, 00:31:22.267 "format": 1, 00:31:22.267 "firmware": 1, 00:31:22.267 "ns_manage": 1 00:31:22.267 }, 00:31:22.267 "multi_ctrlr": false, 00:31:22.267 "ana_reporting": false 00:31:22.267 }, 00:31:22.267 "vs": { 00:31:22.267 "nvme_version": "1.2" 00:31:22.267 }, 00:31:22.267 "ns_data": { 00:31:22.267 "id": 1, 00:31:22.267 "can_share": false 00:31:22.267 } 00:31:22.267 } 00:31:22.267 ], 00:31:22.267 "mp_policy": "active_passive" 00:31:22.267 } 00:31:22.267 } 00:31:22.267 ] 00:31:22.267 10:41:35 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:22.267 10:41:35 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:22.834 [2024-07-26 10:41:35.589591] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cab2e0 PMD being used: compress_qat 00:31:23.770 ba497aea-a419-4c9a-8308-5cf66c67c46f 00:31:23.770 10:41:36 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:24.029 8fc3afcc-8679-424a-af71-a2678f040d49 00:31:24.029 10:41:36 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:24.029 10:41:36 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:31:24.029 10:41:36 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:24.029 10:41:36 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:24.029 10:41:36 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:24.029 10:41:36 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:24.029 10:41:36 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:24.287 10:41:37 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:24.547 [ 00:31:24.547 { 00:31:24.547 "name": "8fc3afcc-8679-424a-af71-a2678f040d49", 00:31:24.547 "aliases": [ 00:31:24.547 "lvs0/lv0" 00:31:24.547 ], 00:31:24.547 "product_name": "Logical Volume", 00:31:24.547 "block_size": 512, 00:31:24.547 "num_blocks": 204800, 00:31:24.547 "uuid": "8fc3afcc-8679-424a-af71-a2678f040d49", 00:31:24.547 "assigned_rate_limits": { 00:31:24.547 "rw_ios_per_sec": 0, 00:31:24.547 "rw_mbytes_per_sec": 0, 00:31:24.547 "r_mbytes_per_sec": 0, 00:31:24.547 "w_mbytes_per_sec": 0 00:31:24.547 }, 00:31:24.547 "claimed": false, 00:31:24.547 "zoned": false, 00:31:24.547 "supported_io_types": { 00:31:24.547 "read": true, 00:31:24.547 "write": true, 00:31:24.547 "unmap": true, 00:31:24.547 "flush": false, 00:31:24.547 "reset": true, 00:31:24.547 "nvme_admin": false, 00:31:24.547 "nvme_io": false, 00:31:24.547 "nvme_io_md": false, 00:31:24.547 "write_zeroes": true, 00:31:24.547 "zcopy": false, 00:31:24.547 "get_zone_info": false, 00:31:24.547 "zone_management": false, 00:31:24.547 "zone_append": false, 00:31:24.547 "compare": false, 00:31:24.547 "compare_and_write": false, 00:31:24.547 "abort": false, 00:31:24.547 "seek_hole": true, 00:31:24.547 "seek_data": true, 00:31:24.547 "copy": false, 00:31:24.547 "nvme_iov_md": false 00:31:24.547 }, 00:31:24.547 "driver_specific": { 00:31:24.547 "lvol": { 00:31:24.547 "lvol_store_uuid": "ba497aea-a419-4c9a-8308-5cf66c67c46f", 00:31:24.547 "base_bdev": "Nvme0n1", 00:31:24.547 "thin_provision": true, 00:31:24.547 "num_allocated_clusters": 0, 00:31:24.547 "snapshot": false, 00:31:24.547 "clone": false, 00:31:24.547 "esnap_clone": false 00:31:24.547 } 00:31:24.547 } 00:31:24.547 } 00:31:24.547 ] 00:31:24.547 10:41:37 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:24.547 10:41:37 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:31:24.547 10:41:37 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:31:24.807 [2024-07-26 10:41:37.511963] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:24.807 COMP_lvs0/lv0 00:31:24.807 10:41:37 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:24.807 10:41:37 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:31:24.807 10:41:37 compress_compdev -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:31:24.807 10:41:37 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:24.807 10:41:37 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:24.807 10:41:37 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:24.807 10:41:37 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:25.066 10:41:37 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:25.066 [ 00:31:25.066 { 00:31:25.066 "name": "COMP_lvs0/lv0", 00:31:25.066 "aliases": [ 00:31:25.066 "984083ac-9c72-5a95-bf86-ceb398bca0f7" 00:31:25.066 ], 00:31:25.066 "product_name": "compress", 00:31:25.066 "block_size": 512, 00:31:25.066 "num_blocks": 200704, 00:31:25.066 "uuid": "984083ac-9c72-5a95-bf86-ceb398bca0f7", 00:31:25.066 "assigned_rate_limits": { 00:31:25.066 "rw_ios_per_sec": 0, 00:31:25.066 "rw_mbytes_per_sec": 0, 00:31:25.066 "r_mbytes_per_sec": 0, 00:31:25.066 "w_mbytes_per_sec": 0 00:31:25.066 }, 00:31:25.066 "claimed": false, 00:31:25.066 "zoned": false, 00:31:25.066 "supported_io_types": { 00:31:25.066 "read": true, 00:31:25.066 "write": true, 00:31:25.066 "unmap": false, 00:31:25.066 "flush": false, 00:31:25.066 "reset": false, 00:31:25.066 "nvme_admin": false, 00:31:25.066 "nvme_io": false, 00:31:25.066 "nvme_io_md": false, 00:31:25.066 "write_zeroes": true, 00:31:25.066 "zcopy": false, 00:31:25.066 "get_zone_info": false, 00:31:25.066 "zone_management": false, 00:31:25.066 "zone_append": false, 00:31:25.066 "compare": false, 00:31:25.066 "compare_and_write": false, 00:31:25.066 "abort": false, 00:31:25.066 "seek_hole": false, 00:31:25.066 "seek_data": false, 00:31:25.066 "copy": false, 00:31:25.066 "nvme_iov_md": false 00:31:25.066 }, 00:31:25.066 "driver_specific": { 00:31:25.066 "compress": { 00:31:25.066 "name": "COMP_lvs0/lv0", 00:31:25.066 "base_bdev_name": "8fc3afcc-8679-424a-af71-a2678f040d49", 00:31:25.066 "pm_path": "/tmp/pmem/79680054-d041-4235-899b-ed81207d5014" 00:31:25.066 } 00:31:25.066 } 00:31:25.066 } 00:31:25.066 ] 00:31:25.325 10:41:37 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:25.325 10:41:37 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:25.325 [2024-07-26 10:41:38.074058] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fd6b41b15c0 PMD being used: compress_qat 00:31:25.325 [2024-07-26 10:41:38.076110] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d366a0 PMD being used: compress_qat 00:31:25.325 Running I/O for 3 seconds... 
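Once this 3-second run completes, the test tears the stack down in reverse order and stops bdevperf, just as after the first pass (the destroy_vols records follow below); a compact sketch under those assumptions:

  # drop the compress bdev, then the logical volume store, then stop bdevperf
  <spdk>/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
  <spdk>/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
  kill "$bdevperf_pid" && wait "$bdevperf_pid" 2>/dev/null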
00:31:28.616 00:31:28.616 Latency(us) 00:31:28.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.616 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:28.616 Verification LBA range: start 0x0 length 0x3100 00:31:28.616 COMP_lvs0/lv0 : 3.01 4045.64 15.80 0.00 0.00 7860.57 129.43 13946.06 00:31:28.616 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:28.616 Verification LBA range: start 0x3100 length 0x3100 00:31:28.616 COMP_lvs0/lv0 : 3.01 4176.35 16.31 0.00 0.00 7624.64 120.42 13002.34 00:31:28.616 =================================================================================================================== 00:31:28.616 Total : 8221.99 32.12 0.00 0.00 7740.72 120.42 13946.06 00:31:28.616 0 00:31:28.616 10:41:41 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:31:28.616 10:41:41 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:28.616 10:41:41 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:28.875 10:41:41 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:28.875 10:41:41 compress_compdev -- compress/compress.sh@78 -- # killprocess 3547864 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3547864 ']' 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3547864 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3547864 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3547864' 00:31:28.875 killing process with pid 3547864 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@969 -- # kill 3547864 00:31:28.875 Received shutdown signal, test time was about 3.000000 seconds 00:31:28.875 00:31:28.875 Latency(us) 00:31:28.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.875 =================================================================================================================== 00:31:28.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:28.875 10:41:41 compress_compdev -- common/autotest_common.sh@974 -- # wait 3547864 00:31:31.409 10:41:44 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:31:31.409 10:41:44 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:31:31.409 10:41:44 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3550063 00:31:31.409 10:41:44 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:31.409 10:41:44 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:31:31.409 10:41:44 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3550063 00:31:31.409 10:41:44 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3550063 ']' 00:31:31.409 10:41:44 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.409 10:41:44 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.409 10:41:44 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.409 10:41:44 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.409 10:41:44 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:31:31.409 [2024-07-26 10:41:44.125179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:31.409 [2024-07-26 10:41:44.125243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550063 ] 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.409 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:31.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:31.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:31.410 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:31.410 [2024-07-26 10:41:44.248194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:31.410 [2024-07-26 10:41:44.292676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.410 [2024-07-26 10:41:44.292681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.979 [2024-07-26 10:41:44.880452] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:31:32.238 10:41:44 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:32.238 10:41:44 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:31:32.238 10:41:44 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:31:32.238 10:41:44 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:32.238 10:41:44 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:35.532 [2024-07-26 10:41:48.007872] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x14b1380 PMD being used: compress_qat 00:31:35.532 10:41:48 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:31:35.532 
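Note on the attach-and-readiness step being traced here: the two compress.sh@34 calls above (gen_nvme.sh, then rpc.py load_subsystem_config, presumably piped together) attach the NVMe controller to the running app, and waitforbdev then confirms Nvme0n1 is usable. A minimal sketch of that sequence, with the long workspace paths shortened to scripts/ for readability (the shortening is an editorial assumption; the RPC names and arguments are exactly those in the trace):

  # generate the NVMe bdev subsystem config and feed it to the already-running SPDK app
  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config
  # block until bdev examination has finished, then confirm the bdev is visible
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000

The JSON dump that follows in the trace is the output of that final bdev_get_bdevs call.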
10:41:48 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:35.532 [ 00:31:35.532 { 00:31:35.532 "name": "Nvme0n1", 00:31:35.532 "aliases": [ 00:31:35.532 "d4b00d94-4803-49eb-bf76-cbda250f0bd6" 00:31:35.532 ], 00:31:35.532 "product_name": "NVMe disk", 00:31:35.532 "block_size": 512, 00:31:35.532 "num_blocks": 3907029168, 00:31:35.532 "uuid": "d4b00d94-4803-49eb-bf76-cbda250f0bd6", 00:31:35.532 "assigned_rate_limits": { 00:31:35.532 "rw_ios_per_sec": 0, 00:31:35.532 "rw_mbytes_per_sec": 0, 00:31:35.532 "r_mbytes_per_sec": 0, 00:31:35.532 "w_mbytes_per_sec": 0 00:31:35.532 }, 00:31:35.532 "claimed": false, 00:31:35.532 "zoned": false, 00:31:35.532 "supported_io_types": { 00:31:35.532 "read": true, 00:31:35.532 "write": true, 00:31:35.532 "unmap": true, 00:31:35.532 "flush": true, 00:31:35.532 "reset": true, 00:31:35.532 "nvme_admin": true, 00:31:35.532 "nvme_io": true, 00:31:35.532 "nvme_io_md": false, 00:31:35.532 "write_zeroes": true, 00:31:35.532 "zcopy": false, 00:31:35.532 "get_zone_info": false, 00:31:35.532 "zone_management": false, 00:31:35.532 "zone_append": false, 00:31:35.532 "compare": false, 00:31:35.532 "compare_and_write": false, 00:31:35.532 "abort": true, 00:31:35.532 "seek_hole": false, 00:31:35.532 "seek_data": false, 00:31:35.532 "copy": false, 00:31:35.532 "nvme_iov_md": false 00:31:35.532 }, 00:31:35.532 "driver_specific": { 00:31:35.532 "nvme": [ 00:31:35.532 { 00:31:35.532 "pci_address": "0000:d8:00.0", 00:31:35.532 "trid": { 00:31:35.532 "trtype": "PCIe", 00:31:35.532 "traddr": "0000:d8:00.0" 00:31:35.532 }, 00:31:35.532 "ctrlr_data": { 00:31:35.532 "cntlid": 0, 00:31:35.532 "vendor_id": "0x8086", 00:31:35.532 "model_number": "INTEL SSDPE2KX020T8", 00:31:35.532 "serial_number": "BTLJ125505KA2P0BGN", 00:31:35.532 "firmware_revision": "VDV10170", 00:31:35.532 "oacs": { 00:31:35.532 "security": 0, 00:31:35.532 "format": 1, 00:31:35.532 "firmware": 1, 00:31:35.532 "ns_manage": 1 00:31:35.532 }, 00:31:35.532 "multi_ctrlr": false, 00:31:35.532 "ana_reporting": false 00:31:35.532 }, 00:31:35.532 "vs": { 00:31:35.532 "nvme_version": "1.2" 00:31:35.532 }, 00:31:35.532 "ns_data": { 00:31:35.532 "id": 1, 00:31:35.532 "can_share": false 00:31:35.532 } 00:31:35.532 } 00:31:35.532 ], 00:31:35.532 "mp_policy": "active_passive" 00:31:35.532 } 00:31:35.532 } 00:31:35.532 ] 00:31:35.532 10:41:48 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:35.532 10:41:48 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:35.791 [2024-07-26 10:41:48.519973] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10df510 PMD being used: compress_qat 00:31:36.727 bad7abac-530f-430b-9a19-f36a3be0ed4c 00:31:36.727 10:41:49 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:36.987 fcfb589e-b0f6-4167-8299-35bb53395ae9 00:31:36.987 10:41:49 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:36.987 10:41:49 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:37.247 [ 00:31:37.247 { 00:31:37.247 "name": "fcfb589e-b0f6-4167-8299-35bb53395ae9", 00:31:37.247 "aliases": [ 00:31:37.247 "lvs0/lv0" 00:31:37.247 ], 00:31:37.247 "product_name": "Logical Volume", 00:31:37.247 "block_size": 512, 00:31:37.247 "num_blocks": 204800, 00:31:37.247 "uuid": "fcfb589e-b0f6-4167-8299-35bb53395ae9", 00:31:37.247 "assigned_rate_limits": { 00:31:37.247 "rw_ios_per_sec": 0, 00:31:37.247 "rw_mbytes_per_sec": 0, 00:31:37.247 "r_mbytes_per_sec": 0, 00:31:37.247 "w_mbytes_per_sec": 0 00:31:37.247 }, 00:31:37.247 "claimed": false, 00:31:37.247 "zoned": false, 00:31:37.247 "supported_io_types": { 00:31:37.247 "read": true, 00:31:37.247 "write": true, 00:31:37.247 "unmap": true, 00:31:37.247 "flush": false, 00:31:37.247 "reset": true, 00:31:37.247 "nvme_admin": false, 00:31:37.247 "nvme_io": false, 00:31:37.247 "nvme_io_md": false, 00:31:37.247 "write_zeroes": true, 00:31:37.247 "zcopy": false, 00:31:37.247 "get_zone_info": false, 00:31:37.247 "zone_management": false, 00:31:37.247 "zone_append": false, 00:31:37.247 "compare": false, 00:31:37.247 "compare_and_write": false, 00:31:37.247 "abort": false, 00:31:37.247 "seek_hole": true, 00:31:37.247 "seek_data": true, 00:31:37.247 "copy": false, 00:31:37.247 "nvme_iov_md": false 00:31:37.247 }, 00:31:37.247 "driver_specific": { 00:31:37.247 "lvol": { 00:31:37.247 "lvol_store_uuid": "bad7abac-530f-430b-9a19-f36a3be0ed4c", 00:31:37.247 "base_bdev": "Nvme0n1", 00:31:37.247 "thin_provision": true, 00:31:37.247 "num_allocated_clusters": 0, 00:31:37.247 "snapshot": false, 00:31:37.247 "clone": false, 00:31:37.247 "esnap_clone": false 00:31:37.247 } 00:31:37.247 } 00:31:37.247 } 00:31:37.247 ] 00:31:37.247 10:41:50 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:37.247 10:41:50 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:31:37.247 10:41:50 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:31:37.506 [2024-07-26 10:41:50.251742] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:37.506 COMP_lvs0/lv0 00:31:37.506 10:41:50 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:37.506 10:41:50 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:31:37.506 10:41:50 compress_compdev -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:31:37.506 10:41:50 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:37.506 10:41:50 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:37.506 10:41:50 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:37.506 10:41:50 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:37.765 10:41:50 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:37.765 [ 00:31:37.765 { 00:31:37.765 "name": "COMP_lvs0/lv0", 00:31:37.765 "aliases": [ 00:31:37.765 "2cca3a72-4e6f-5eb8-82d1-aa9b62f8d3a9" 00:31:37.765 ], 00:31:37.765 "product_name": "compress", 00:31:37.765 "block_size": 4096, 00:31:37.765 "num_blocks": 25088, 00:31:37.765 "uuid": "2cca3a72-4e6f-5eb8-82d1-aa9b62f8d3a9", 00:31:37.765 "assigned_rate_limits": { 00:31:37.765 "rw_ios_per_sec": 0, 00:31:37.765 "rw_mbytes_per_sec": 0, 00:31:37.765 "r_mbytes_per_sec": 0, 00:31:37.765 "w_mbytes_per_sec": 0 00:31:37.765 }, 00:31:37.765 "claimed": false, 00:31:37.765 "zoned": false, 00:31:37.765 "supported_io_types": { 00:31:37.765 "read": true, 00:31:37.765 "write": true, 00:31:37.765 "unmap": false, 00:31:37.765 "flush": false, 00:31:37.765 "reset": false, 00:31:37.765 "nvme_admin": false, 00:31:37.765 "nvme_io": false, 00:31:37.765 "nvme_io_md": false, 00:31:37.765 "write_zeroes": true, 00:31:37.765 "zcopy": false, 00:31:37.765 "get_zone_info": false, 00:31:37.765 "zone_management": false, 00:31:37.765 "zone_append": false, 00:31:37.765 "compare": false, 00:31:37.765 "compare_and_write": false, 00:31:37.765 "abort": false, 00:31:37.765 "seek_hole": false, 00:31:37.765 "seek_data": false, 00:31:37.765 "copy": false, 00:31:37.765 "nvme_iov_md": false 00:31:37.765 }, 00:31:37.765 "driver_specific": { 00:31:37.765 "compress": { 00:31:37.766 "name": "COMP_lvs0/lv0", 00:31:37.766 "base_bdev_name": "fcfb589e-b0f6-4167-8299-35bb53395ae9", 00:31:37.766 "pm_path": "/tmp/pmem/9353387b-7bf1-423a-9a93-4b2a542eb184" 00:31:37.766 } 00:31:37.766 } 00:31:37.766 } 00:31:37.766 ] 00:31:37.766 10:41:50 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:37.766 10:41:50 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:38.025 [2024-07-26 10:41:50.685362] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f08e01b15c0 PMD being used: compress_qat 00:31:38.025 [2024-07-26 10:41:50.687397] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1393690 PMD being used: compress_qat 00:31:38.025 Running I/O for 3 seconds... 
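The 3-second verify pass that has just started was set up by two calls worth highlighting: the compress vbdev above was created with an explicit 4 KiB logical block size (hence block_size 4096 in its bdev_get_bdevs output), and the workload is then triggered over RPC against the bdevperf process that was started idle with -z. A sketch of just those two steps, paths shortened as before:

  # create the compress vbdev on top of the thin lvol, forcing a 4096-byte logical block size
  scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096
  # tell the idle bdevperf instance (started with -z) to run its configured verify workload
  examples/bdev/bdevperf/bdevperf.py perform_tests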
00:31:41.353 00:31:41.353 Latency(us) 00:31:41.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.353 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:31:41.353 Verification LBA range: start 0x0 length 0x3100 00:31:41.353 COMP_lvs0/lv0 : 3.01 3748.18 14.64 0.00 0.00 8476.19 174.49 16252.93 00:31:41.353 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:31:41.353 Verification LBA range: start 0x3100 length 0x3100 00:31:41.353 COMP_lvs0/lv0 : 3.01 3813.46 14.90 0.00 0.00 8345.70 165.48 16462.64 00:31:41.353 =================================================================================================================== 00:31:41.353 Total : 7561.64 29.54 0.00 0.00 8410.41 165.48 16462.64 00:31:41.353 0 00:31:41.353 10:41:53 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:31:41.353 10:41:53 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:41.353 10:41:53 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:41.353 10:41:54 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:41.353 10:41:54 compress_compdev -- compress/compress.sh@78 -- # killprocess 3550063 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3550063 ']' 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3550063 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550063 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550063' 00:31:41.353 killing process with pid 3550063 00:31:41.353 10:41:54 compress_compdev -- common/autotest_common.sh@969 -- # kill 3550063 00:31:41.353 Received shutdown signal, test time was about 3.000000 seconds 00:31:41.353 00:31:41.353 Latency(us) 00:31:41.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.354 =================================================================================================================== 00:31:41.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.354 10:41:54 compress_compdev -- common/autotest_common.sh@974 -- # wait 3550063 00:31:43.885 10:41:56 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:31:43.885 10:41:56 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:31:43.885 10:41:56 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=3552128 00:31:43.885 10:41:56 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:43.885 10:41:56 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:31:43.885 10:41:56 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 
3552128 00:31:43.885 10:41:56 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3552128 ']' 00:31:43.885 10:41:56 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.885 10:41:56 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:43.885 10:41:56 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.885 10:41:56 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:43.885 10:41:56 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:31:43.885 [2024-07-26 10:41:56.747455] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:43.885 [2024-07-26 10:41:56.747529] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552128 ] 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 
0000:3f:01.0 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.144 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:44.144 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:44.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.145 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:44.145 [2024-07-26 10:41:56.879819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:44.145 [2024-07-26 10:41:56.926959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.145 [2024-07-26 10:41:56.927052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.145 [2024-07-26 10:41:56.927053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.712 [2024-07-26 10:41:57.516825] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:31:44.970 10:41:57 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.970 10:41:57 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:31:44.970 10:41:57 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:31:44.970 10:41:57 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:44.970 10:41:57 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:48.258 [2024-07-26 10:42:00.752364] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1878f70 PMD being used: compress_qat 00:31:48.258 10:42:00 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:31:48.258 10:42:00 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=Nvme0n1 00:31:48.258 10:42:00 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:48.258 10:42:00 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:48.258 10:42:00 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:48.258 10:42:00 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:48.258 10:42:00 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:48.258 10:42:01 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:48.517 [ 00:31:48.517 { 00:31:48.517 "name": "Nvme0n1", 00:31:48.517 "aliases": [ 00:31:48.517 "9a3d3e50-e98c-4b57-ab06-6c851765fc51" 00:31:48.517 ], 00:31:48.517 "product_name": "NVMe disk", 00:31:48.517 "block_size": 512, 00:31:48.517 "num_blocks": 3907029168, 00:31:48.517 "uuid": "9a3d3e50-e98c-4b57-ab06-6c851765fc51", 00:31:48.518 "assigned_rate_limits": { 00:31:48.518 "rw_ios_per_sec": 0, 00:31:48.518 "rw_mbytes_per_sec": 0, 00:31:48.518 "r_mbytes_per_sec": 0, 00:31:48.518 "w_mbytes_per_sec": 0 00:31:48.518 }, 00:31:48.518 "claimed": false, 00:31:48.518 "zoned": false, 00:31:48.518 "supported_io_types": { 00:31:48.518 "read": true, 00:31:48.518 "write": true, 00:31:48.518 "unmap": true, 00:31:48.518 "flush": true, 00:31:48.518 "reset": true, 00:31:48.518 "nvme_admin": true, 00:31:48.518 "nvme_io": true, 00:31:48.518 "nvme_io_md": false, 00:31:48.518 "write_zeroes": true, 00:31:48.518 "zcopy": false, 00:31:48.518 "get_zone_info": false, 00:31:48.518 "zone_management": false, 00:31:48.518 "zone_append": false, 00:31:48.518 "compare": false, 00:31:48.518 "compare_and_write": false, 00:31:48.518 "abort": true, 00:31:48.518 "seek_hole": false, 00:31:48.518 "seek_data": false, 00:31:48.518 "copy": false, 00:31:48.518 "nvme_iov_md": false 00:31:48.518 }, 00:31:48.518 "driver_specific": { 00:31:48.518 "nvme": [ 00:31:48.518 { 00:31:48.518 "pci_address": "0000:d8:00.0", 00:31:48.518 "trid": { 00:31:48.518 "trtype": "PCIe", 00:31:48.518 "traddr": "0000:d8:00.0" 00:31:48.518 }, 00:31:48.518 "ctrlr_data": { 00:31:48.518 "cntlid": 0, 00:31:48.518 "vendor_id": "0x8086", 00:31:48.518 "model_number": "INTEL SSDPE2KX020T8", 00:31:48.518 "serial_number": "BTLJ125505KA2P0BGN", 00:31:48.518 "firmware_revision": "VDV10170", 00:31:48.518 "oacs": { 00:31:48.518 "security": 0, 00:31:48.518 "format": 1, 00:31:48.518 "firmware": 1, 00:31:48.518 "ns_manage": 1 00:31:48.518 }, 00:31:48.518 "multi_ctrlr": false, 00:31:48.518 "ana_reporting": false 00:31:48.518 }, 00:31:48.518 "vs": { 00:31:48.518 "nvme_version": "1.2" 00:31:48.518 }, 00:31:48.518 "ns_data": { 00:31:48.518 "id": 1, 00:31:48.518 "can_share": false 00:31:48.518 } 00:31:48.518 } 00:31:48.518 ], 00:31:48.518 "mp_policy": "active_passive" 00:31:48.518 } 00:31:48.518 } 00:31:48.518 ] 00:31:48.518 10:42:01 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:48.518 10:42:01 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:48.518 [2024-07-26 10:42:01.413945] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1879e80 PMD being used: compress_qat 00:31:49.897 3dddd1bb-ee05-4963-b521-2bbd3073c4d1 00:31:49.897 10:42:02 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:31:49.897 1e07c21c-76a4-4ca3-81f6-a20fc5580516 00:31:49.897 10:42:02 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:31:49.897 10:42:02 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:31:49.897 10:42:02 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:49.897 10:42:02 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:49.897 10:42:02 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:49.897 10:42:02 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:49.897 10:42:02 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:50.156 10:42:02 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:31:50.415 [ 00:31:50.415 { 00:31:50.415 "name": "1e07c21c-76a4-4ca3-81f6-a20fc5580516", 00:31:50.415 "aliases": [ 00:31:50.415 "lvs0/lv0" 00:31:50.415 ], 00:31:50.415 "product_name": "Logical Volume", 00:31:50.415 "block_size": 512, 00:31:50.415 "num_blocks": 204800, 00:31:50.415 "uuid": "1e07c21c-76a4-4ca3-81f6-a20fc5580516", 00:31:50.415 "assigned_rate_limits": { 00:31:50.415 "rw_ios_per_sec": 0, 00:31:50.415 "rw_mbytes_per_sec": 0, 00:31:50.415 "r_mbytes_per_sec": 0, 00:31:50.415 "w_mbytes_per_sec": 0 00:31:50.415 }, 00:31:50.415 "claimed": false, 00:31:50.415 "zoned": false, 00:31:50.415 "supported_io_types": { 00:31:50.415 "read": true, 00:31:50.415 "write": true, 00:31:50.415 "unmap": true, 00:31:50.415 "flush": false, 00:31:50.415 "reset": true, 00:31:50.415 "nvme_admin": false, 00:31:50.415 "nvme_io": false, 00:31:50.415 "nvme_io_md": false, 00:31:50.415 "write_zeroes": true, 00:31:50.415 "zcopy": false, 00:31:50.415 "get_zone_info": false, 00:31:50.415 "zone_management": false, 00:31:50.415 "zone_append": false, 00:31:50.415 "compare": false, 00:31:50.415 "compare_and_write": false, 00:31:50.415 "abort": false, 00:31:50.415 "seek_hole": true, 00:31:50.415 "seek_data": true, 00:31:50.415 "copy": false, 00:31:50.415 "nvme_iov_md": false 00:31:50.415 }, 00:31:50.415 "driver_specific": { 00:31:50.415 "lvol": { 00:31:50.415 "lvol_store_uuid": "3dddd1bb-ee05-4963-b521-2bbd3073c4d1", 00:31:50.415 "base_bdev": "Nvme0n1", 00:31:50.415 "thin_provision": true, 00:31:50.415 "num_allocated_clusters": 0, 00:31:50.415 "snapshot": false, 00:31:50.415 "clone": false, 00:31:50.415 "esnap_clone": false 00:31:50.415 } 00:31:50.415 } 00:31:50.415 } 00:31:50.415 ] 00:31:50.415 10:42:03 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:50.415 10:42:03 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:31:50.415 10:42:03 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:31:50.674 [2024-07-26 10:42:03.344726] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:31:50.674 COMP_lvs0/lv0 00:31:50.674 10:42:03 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:31:50.674 10:42:03 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:31:50.674 10:42:03 compress_compdev -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:31:50.674 10:42:03 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:50.674 10:42:03 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:50.674 10:42:03 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:50.674 10:42:03 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:50.932 10:42:03 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:31:50.932 [ 00:31:50.932 { 00:31:50.932 "name": "COMP_lvs0/lv0", 00:31:50.932 "aliases": [ 00:31:50.932 "f462b43c-6e0c-5868-8ed6-6fca9777a423" 00:31:50.932 ], 00:31:50.932 "product_name": "compress", 00:31:50.932 "block_size": 512, 00:31:50.932 "num_blocks": 200704, 00:31:50.932 "uuid": "f462b43c-6e0c-5868-8ed6-6fca9777a423", 00:31:50.932 "assigned_rate_limits": { 00:31:50.932 "rw_ios_per_sec": 0, 00:31:50.932 "rw_mbytes_per_sec": 0, 00:31:50.932 "r_mbytes_per_sec": 0, 00:31:50.932 "w_mbytes_per_sec": 0 00:31:50.932 }, 00:31:50.932 "claimed": false, 00:31:50.932 "zoned": false, 00:31:50.932 "supported_io_types": { 00:31:50.932 "read": true, 00:31:50.932 "write": true, 00:31:50.932 "unmap": false, 00:31:50.932 "flush": false, 00:31:50.932 "reset": false, 00:31:50.932 "nvme_admin": false, 00:31:50.932 "nvme_io": false, 00:31:50.932 "nvme_io_md": false, 00:31:50.932 "write_zeroes": true, 00:31:50.932 "zcopy": false, 00:31:50.932 "get_zone_info": false, 00:31:50.932 "zone_management": false, 00:31:50.932 "zone_append": false, 00:31:50.932 "compare": false, 00:31:50.932 "compare_and_write": false, 00:31:50.932 "abort": false, 00:31:50.932 "seek_hole": false, 00:31:50.932 "seek_data": false, 00:31:50.932 "copy": false, 00:31:50.932 "nvme_iov_md": false 00:31:50.932 }, 00:31:50.932 "driver_specific": { 00:31:50.932 "compress": { 00:31:50.932 "name": "COMP_lvs0/lv0", 00:31:50.932 "base_bdev_name": "1e07c21c-76a4-4ca3-81f6-a20fc5580516", 00:31:50.932 "pm_path": "/tmp/pmem/95976682-14a0-4886-9473-09364d2551a4" 00:31:50.932 } 00:31:50.932 } 00:31:50.932 } 00:31:50.932 ] 00:31:51.191 10:42:03 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:51.191 10:42:03 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:51.191 [2024-07-26 10:42:03.934000] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f5f601b1350 PMD being used: compress_qat 00:31:51.191 I/O targets: 00:31:51.191 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:31:51.191 00:31:51.191 00:31:51.191 CUnit - A unit testing framework for C - Version 2.1-3 00:31:51.191 http://cunit.sourceforge.net/ 00:31:51.191 00:31:51.191 00:31:51.191 Suite: bdevio tests on: COMP_lvs0/lv0 00:31:51.191 Test: blockdev write read block ...passed 00:31:51.191 Test: blockdev write zeroes read block ...passed 00:31:51.191 Test: blockdev write zeroes read no split ...passed 00:31:51.191 Test: blockdev write zeroes read split ...passed 00:31:51.191 Test: blockdev write zeroes read split partial ...passed 00:31:51.191 Test: blockdev reset ...[2024-07-26 10:42:03.977559] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:31:51.191 passed 00:31:51.191 Test: blockdev write read 8 blocks ...passed 00:31:51.191 Test: blockdev write read size > 128k ...passed 00:31:51.191 Test: blockdev write read invalid 
size ...passed 00:31:51.191 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:51.191 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:51.191 Test: blockdev write read max offset ...passed 00:31:51.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:51.191 Test: blockdev writev readv 8 blocks ...passed 00:31:51.191 Test: blockdev writev readv 30 x 1block ...passed 00:31:51.191 Test: blockdev writev readv block ...passed 00:31:51.191 Test: blockdev writev readv size > 128k ...passed 00:31:51.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:51.191 Test: blockdev comparev and writev ...passed 00:31:51.191 Test: blockdev nvme passthru rw ...passed 00:31:51.191 Test: blockdev nvme passthru vendor specific ...passed 00:31:51.191 Test: blockdev nvme admin passthru ...passed 00:31:51.191 Test: blockdev copy ...passed 00:31:51.191 00:31:51.191 Run Summary: Type Total Ran Passed Failed Inactive 00:31:51.191 suites 1 1 n/a 0 0 00:31:51.191 tests 23 23 23 0 0 00:31:51.191 asserts 130 130 130 0 n/a 00:31:51.191 00:31:51.191 Elapsed time = 0.162 seconds 00:31:51.191 0 00:31:51.191 10:42:04 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:31:51.191 10:42:04 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:31:51.450 10:42:04 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:31:51.709 10:42:04 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:31:51.709 10:42:04 compress_compdev -- compress/compress.sh@62 -- # killprocess 3552128 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3552128 ']' 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3552128 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3552128 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3552128' 00:31:51.709 killing process with pid 3552128 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@969 -- # kill 3552128 00:31:51.709 10:42:04 compress_compdev -- common/autotest_common.sh@974 -- # wait 3552128 00:31:54.244 10:42:06 compress_compdev -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:31:54.244 10:42:06 compress_compdev -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:31:54.244 10:42:06 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:31:54.244 10:42:06 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3554383 00:31:54.244 10:42:06 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:54.244 10:42:06 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3554383 00:31:54.245 10:42:06 compress_compdev -- compress/compress.sh@67 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:31:54.245 10:42:06 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3554383 ']' 00:31:54.245 10:42:06 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.245 10:42:06 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:54.245 10:42:06 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.245 10:42:06 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:54.245 10:42:06 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:31:54.245 [2024-07-26 10:42:06.918305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:31:54.245 [2024-07-26 10:42:06.918368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554383 ] 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:31:54.245 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:54.245 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:54.245 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:54.245 [2024-07-26 10:42:07.040385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:54.245 [2024-07-26 10:42:07.086353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.245 [2024-07-26 10:42:07.086354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.813 [2024-07-26 10:42:07.669788] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:31:55.073 10:42:07 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.073 10:42:07 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:31:55.073 10:42:07 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:31:55.073 10:42:07 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:55.073 10:42:07 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:58.362 [2024-07-26 10:42:10.903744] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1607380 PMD being used: compress_qat 00:31:58.362 10:42:10 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 
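The volume stack the following trace builds for this 30-second run is the same three-layer arrangement used in the earlier passes: a clear-method-none lvstore on Nvme0n1, a 100 MiB thin-provisioned lvol, and a compress vbdev backed by /tmp/pmem. As a condensed sketch (paths shortened; note that -l is omitted this time, and the resulting COMP_lvs0/lv0 shown later in the trace keeps a 512-byte block size):

  scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
  scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100          # 100 MiB, thin-provisioned
  scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem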
00:31:58.362 10:42:10 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:31:58.362 10:42:10 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:58.362 10:42:10 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:31:58.362 10:42:10 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:58.362 10:42:10 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:58.362 10:42:10 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:58.363 10:42:11 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:31:58.660 [ 00:31:58.660 { 00:31:58.661 "name": "Nvme0n1", 00:31:58.661 "aliases": [ 00:31:58.661 "dc7b8a8f-15b8-4558-a311-22554d20ec95" 00:31:58.661 ], 00:31:58.661 "product_name": "NVMe disk", 00:31:58.661 "block_size": 512, 00:31:58.661 "num_blocks": 3907029168, 00:31:58.661 "uuid": "dc7b8a8f-15b8-4558-a311-22554d20ec95", 00:31:58.661 "assigned_rate_limits": { 00:31:58.661 "rw_ios_per_sec": 0, 00:31:58.661 "rw_mbytes_per_sec": 0, 00:31:58.661 "r_mbytes_per_sec": 0, 00:31:58.661 "w_mbytes_per_sec": 0 00:31:58.661 }, 00:31:58.661 "claimed": false, 00:31:58.661 "zoned": false, 00:31:58.661 "supported_io_types": { 00:31:58.661 "read": true, 00:31:58.661 "write": true, 00:31:58.661 "unmap": true, 00:31:58.661 "flush": true, 00:31:58.661 "reset": true, 00:31:58.661 "nvme_admin": true, 00:31:58.661 "nvme_io": true, 00:31:58.661 "nvme_io_md": false, 00:31:58.661 "write_zeroes": true, 00:31:58.661 "zcopy": false, 00:31:58.661 "get_zone_info": false, 00:31:58.661 "zone_management": false, 00:31:58.661 "zone_append": false, 00:31:58.661 "compare": false, 00:31:58.661 "compare_and_write": false, 00:31:58.661 "abort": true, 00:31:58.661 "seek_hole": false, 00:31:58.661 "seek_data": false, 00:31:58.661 "copy": false, 00:31:58.661 "nvme_iov_md": false 00:31:58.661 }, 00:31:58.661 "driver_specific": { 00:31:58.661 "nvme": [ 00:31:58.661 { 00:31:58.661 "pci_address": "0000:d8:00.0", 00:31:58.661 "trid": { 00:31:58.661 "trtype": "PCIe", 00:31:58.661 "traddr": "0000:d8:00.0" 00:31:58.661 }, 00:31:58.661 "ctrlr_data": { 00:31:58.661 "cntlid": 0, 00:31:58.661 "vendor_id": "0x8086", 00:31:58.661 "model_number": "INTEL SSDPE2KX020T8", 00:31:58.661 "serial_number": "BTLJ125505KA2P0BGN", 00:31:58.661 "firmware_revision": "VDV10170", 00:31:58.661 "oacs": { 00:31:58.661 "security": 0, 00:31:58.661 "format": 1, 00:31:58.661 "firmware": 1, 00:31:58.661 "ns_manage": 1 00:31:58.661 }, 00:31:58.661 "multi_ctrlr": false, 00:31:58.661 "ana_reporting": false 00:31:58.661 }, 00:31:58.661 "vs": { 00:31:58.661 "nvme_version": "1.2" 00:31:58.661 }, 00:31:58.661 "ns_data": { 00:31:58.661 "id": 1, 00:31:58.661 "can_share": false 00:31:58.661 } 00:31:58.661 } 00:31:58.661 ], 00:31:58.661 "mp_policy": "active_passive" 00:31:58.661 } 00:31:58.661 } 00:31:58.661 ] 00:31:58.661 10:42:11 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:31:58.661 10:42:11 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:31:58.949 [2024-07-26 10:42:11.628967] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1235650 PMD being used: compress_qat 00:31:59.886 1de9b793-0dec-4c21-b166-a00f655f115f 00:31:59.886 
10:42:12 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:32:00.145 25f13c6c-e0c0-43ab-aeaa-a797299d8c53 00:32:00.145 10:42:12 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:32:00.145 10:42:12 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:32:00.145 10:42:12 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.145 10:42:12 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:32:00.145 10:42:12 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.145 10:42:12 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:00.145 10:42:12 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:00.404 10:42:13 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:32:00.663 [ 00:32:00.663 { 00:32:00.663 "name": "25f13c6c-e0c0-43ab-aeaa-a797299d8c53", 00:32:00.663 "aliases": [ 00:32:00.663 "lvs0/lv0" 00:32:00.663 ], 00:32:00.663 "product_name": "Logical Volume", 00:32:00.663 "block_size": 512, 00:32:00.663 "num_blocks": 204800, 00:32:00.663 "uuid": "25f13c6c-e0c0-43ab-aeaa-a797299d8c53", 00:32:00.663 "assigned_rate_limits": { 00:32:00.663 "rw_ios_per_sec": 0, 00:32:00.663 "rw_mbytes_per_sec": 0, 00:32:00.663 "r_mbytes_per_sec": 0, 00:32:00.663 "w_mbytes_per_sec": 0 00:32:00.663 }, 00:32:00.663 "claimed": false, 00:32:00.663 "zoned": false, 00:32:00.663 "supported_io_types": { 00:32:00.663 "read": true, 00:32:00.663 "write": true, 00:32:00.663 "unmap": true, 00:32:00.663 "flush": false, 00:32:00.663 "reset": true, 00:32:00.663 "nvme_admin": false, 00:32:00.663 "nvme_io": false, 00:32:00.663 "nvme_io_md": false, 00:32:00.663 "write_zeroes": true, 00:32:00.663 "zcopy": false, 00:32:00.663 "get_zone_info": false, 00:32:00.663 "zone_management": false, 00:32:00.663 "zone_append": false, 00:32:00.663 "compare": false, 00:32:00.663 "compare_and_write": false, 00:32:00.663 "abort": false, 00:32:00.663 "seek_hole": true, 00:32:00.663 "seek_data": true, 00:32:00.663 "copy": false, 00:32:00.663 "nvme_iov_md": false 00:32:00.663 }, 00:32:00.663 "driver_specific": { 00:32:00.663 "lvol": { 00:32:00.663 "lvol_store_uuid": "1de9b793-0dec-4c21-b166-a00f655f115f", 00:32:00.663 "base_bdev": "Nvme0n1", 00:32:00.663 "thin_provision": true, 00:32:00.663 "num_allocated_clusters": 0, 00:32:00.663 "snapshot": false, 00:32:00.663 "clone": false, 00:32:00.663 "esnap_clone": false 00:32:00.663 } 00:32:00.663 } 00:32:00.663 } 00:32:00.663 ] 00:32:00.663 10:42:13 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:32:00.663 10:42:13 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:32:00.663 10:42:13 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:32:00.663 [2024-07-26 10:42:13.541586] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:32:00.663 COMP_lvs0/lv0 00:32:00.663 10:42:13 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:32:00.663 10:42:13 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:32:00.663 10:42:13 
compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.663 10:42:13 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:32:00.663 10:42:13 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.663 10:42:13 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:00.663 10:42:13 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:00.922 10:42:13 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:32:01.180 [ 00:32:01.180 { 00:32:01.180 "name": "COMP_lvs0/lv0", 00:32:01.180 "aliases": [ 00:32:01.180 "22d07102-ec72-533a-b0cb-427502bfb6d9" 00:32:01.180 ], 00:32:01.180 "product_name": "compress", 00:32:01.180 "block_size": 512, 00:32:01.180 "num_blocks": 200704, 00:32:01.180 "uuid": "22d07102-ec72-533a-b0cb-427502bfb6d9", 00:32:01.180 "assigned_rate_limits": { 00:32:01.180 "rw_ios_per_sec": 0, 00:32:01.180 "rw_mbytes_per_sec": 0, 00:32:01.180 "r_mbytes_per_sec": 0, 00:32:01.180 "w_mbytes_per_sec": 0 00:32:01.180 }, 00:32:01.180 "claimed": false, 00:32:01.180 "zoned": false, 00:32:01.180 "supported_io_types": { 00:32:01.180 "read": true, 00:32:01.180 "write": true, 00:32:01.180 "unmap": false, 00:32:01.180 "flush": false, 00:32:01.180 "reset": false, 00:32:01.180 "nvme_admin": false, 00:32:01.180 "nvme_io": false, 00:32:01.180 "nvme_io_md": false, 00:32:01.180 "write_zeroes": true, 00:32:01.180 "zcopy": false, 00:32:01.180 "get_zone_info": false, 00:32:01.180 "zone_management": false, 00:32:01.180 "zone_append": false, 00:32:01.180 "compare": false, 00:32:01.180 "compare_and_write": false, 00:32:01.180 "abort": false, 00:32:01.180 "seek_hole": false, 00:32:01.180 "seek_data": false, 00:32:01.180 "copy": false, 00:32:01.180 "nvme_iov_md": false 00:32:01.180 }, 00:32:01.180 "driver_specific": { 00:32:01.180 "compress": { 00:32:01.180 "name": "COMP_lvs0/lv0", 00:32:01.180 "base_bdev_name": "25f13c6c-e0c0-43ab-aeaa-a797299d8c53", 00:32:01.180 "pm_path": "/tmp/pmem/2d0917ae-4978-4b43-9a42-ea42a9904e3f" 00:32:01.180 } 00:32:01.180 } 00:32:01.180 } 00:32:01.180 ] 00:32:01.180 10:42:14 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:32:01.180 10:42:14 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:01.438 [2024-07-26 10:42:14.131720] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f86881b15c0 PMD being used: compress_qat 00:32:01.438 [2024-07-26 10:42:14.133749] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x14e3d90 PMD being used: compress_qat 00:32:01.438 Running I/O for 30 seconds... 
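Once the 30-second verify run below finishes, the teardown repeats the destroy_vols/killprocess pattern already seen after the shorter passes. In essence, with the pid as reported in the trace:

  scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
  scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
  kill 3554383 && wait 3554383    # stop the bdevperf reactor process and reap it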
00:32:33.513 00:32:33.513 Latency(us) 00:32:33.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.513 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:32:33.513 Verification LBA range: start 0x0 length 0xc40 00:32:33.513 COMP_lvs0/lv0 : 30.01 1817.30 28.40 0.00 0.00 34966.29 206.44 32925.29 00:32:33.513 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:32:33.513 Verification LBA range: start 0xc40 length 0xc40 00:32:33.513 COMP_lvs0/lv0 : 30.00 5426.91 84.80 0.00 0.00 11679.05 133.53 27472.69 00:32:33.513 =================================================================================================================== 00:32:33.513 Total : 7244.21 113.19 0.00 0.00 17521.43 133.53 32925.29 00:32:33.513 0 00:32:33.513 10:42:44 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:32:33.513 10:42:44 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:32:33.513 10:42:44 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:32:33.513 10:42:44 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:32:33.513 10:42:44 compress_compdev -- compress/compress.sh@78 -- # killprocess 3554383 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3554383 ']' 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3554383 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3554383 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3554383' 00:32:33.513 killing process with pid 3554383 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@969 -- # kill 3554383 00:32:33.513 Received shutdown signal, test time was about 30.000000 seconds 00:32:33.513 00:32:33.513 Latency(us) 00:32:33.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.513 =================================================================================================================== 00:32:33.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.513 10:42:44 compress_compdev -- common/autotest_common.sh@974 -- # wait 3554383 00:32:34.450 10:42:47 compress_compdev -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:32:34.450 10:42:47 compress_compdev -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:32:34.450 10:42:47 compress_compdev -- compress/compress.sh@96 -- # NET_TYPE=virt 00:32:34.450 10:42:47 compress_compdev -- compress/compress.sh@96 -- # nvmftestinit 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.450 
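The nvmftestinit step that follows builds a small virtual test network out of veth pairs, a network namespace, and a bridge; the "Cannot find device" and "Cannot open network namespace" messages appear to be the initial cleanup probing for leftovers from a previous run before the setup proper begins. Condensed, the commands the trace below executes amount to the following sketch (same interface and address names as in the trace; link "up" steps elided for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side ends into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # address the initiator side and the two target interfaces
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the initiator and target bridge ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe-oF TCP traffic in and verify reachability into the namespace
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2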
10:42:47 compress_compdev -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.450 10:42:47 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:34.450 10:42:47 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:34.450 Cannot find device "nvmf_tgt_br" 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@155 -- # true 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:34.450 Cannot find device "nvmf_tgt_br2" 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@156 -- # true 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:34.450 Cannot find device "nvmf_tgt_br" 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@158 -- # true 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:34.450 Cannot find device "nvmf_tgt_br2" 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@159 -- # true 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:34.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@162 -- # true 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:34.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@163 -- # true 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:34.450 10:42:47 compress_compdev -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:34.709 10:42:47 compress_compdev -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:34.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:32:34.968 00:32:34.968 --- 10.0.0.2 ping statistics --- 00:32:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.968 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:34.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:32:34.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:32:34.968 00:32:34.968 --- 10.0.0.3 ping statistics --- 00:32:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.968 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:34.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:34.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:32:34.968 00:32:34.968 --- 10.0.0.1 ping statistics --- 00:32:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.968 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@433 -- # return 0 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:34.968 10:42:47 compress_compdev -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@481 -- # nvmfpid=3561408 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@482 -- # waitforlisten 3561408 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3561408 ']' 00:32:34.968 10:42:47 compress_compdev -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.968 10:42:47 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:32:34.968 [2024-07-26 10:42:47.848078] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:32:34.968 [2024-07-26 10:42:47.848147] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:35.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.227 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:35.228 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:35.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:35.228 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:35.228 [2024-07-26 10:42:47.991800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:35.228 [2024-07-26 10:42:48.038539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.228 [2024-07-26 10:42:48.038587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.228 [2024-07-26 10:42:48.038600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.228 [2024-07-26 10:42:48.038611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.228 [2024-07-26 10:42:48.038621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
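The virtual NVMe-oF topology used from here on is built by nvmf_veth_init, and the target is launched inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that setup, reconstructed from the ip/iptables commands in the trace (addresses and interface names as traced; teardown and error handling omitted):
#!/usr/bin/env bash
# Condensed sketch of nvmf_veth_init plus the namespaced nvmf_tgt launch.
set -e
ip netns add nvmf_tgt_ns_spdk
# Veth pairs: one for the initiator side, two for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic on port 4420 and verify reachability both ways
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
# Launch the target inside the namespace (flags as in the trace)
ip netns exec nvmf_tgt_ns_spdk \
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &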
00:32:35.228 [2024-07-26 10:42:48.038719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.228 [2024-07-26 10:42:48.038814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:35.228 [2024-07-26 10:42:48.038818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.795 10:42:48 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.795 10:42:48 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:32:35.795 10:42:48 compress_compdev -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:35.795 10:42:48 compress_compdev -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:35.795 10:42:48 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:32:36.053 10:42:48 compress_compdev -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.053 10:42:48 compress_compdev -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:36.053 10:42:48 compress_compdev -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:32:36.053 [2024-07-26 10:42:48.945818] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.312 10:42:48 compress_compdev -- compress/compress.sh@102 -- # create_vols 00:32:36.312 10:42:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:36.312 10:42:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:39.601 10:42:52 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:39.601 10:42:52 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:32:39.861 [ 00:32:39.861 { 00:32:39.861 "name": "Nvme0n1", 00:32:39.861 "aliases": [ 00:32:39.861 "f2c544f6-efb3-4856-8c72-a9ad21425d47" 00:32:39.861 ], 00:32:39.861 "product_name": "NVMe disk", 00:32:39.861 "block_size": 512, 00:32:39.861 "num_blocks": 3907029168, 00:32:39.861 "uuid": "f2c544f6-efb3-4856-8c72-a9ad21425d47", 00:32:39.861 "assigned_rate_limits": { 00:32:39.861 "rw_ios_per_sec": 0, 00:32:39.861 "rw_mbytes_per_sec": 0, 00:32:39.861 "r_mbytes_per_sec": 0, 00:32:39.861 "w_mbytes_per_sec": 0 00:32:39.861 }, 00:32:39.861 "claimed": false, 00:32:39.861 "zoned": false, 00:32:39.861 "supported_io_types": { 00:32:39.861 "read": true, 00:32:39.861 "write": true, 00:32:39.861 "unmap": true, 00:32:39.861 "flush": true, 00:32:39.861 "reset": true, 00:32:39.861 "nvme_admin": true, 00:32:39.861 "nvme_io": true, 00:32:39.861 "nvme_io_md": false, 00:32:39.861 "write_zeroes": true, 00:32:39.861 "zcopy": false, 00:32:39.861 "get_zone_info": false, 
00:32:39.861 "zone_management": false, 00:32:39.861 "zone_append": false, 00:32:39.861 "compare": false, 00:32:39.861 "compare_and_write": false, 00:32:39.861 "abort": true, 00:32:39.861 "seek_hole": false, 00:32:39.861 "seek_data": false, 00:32:39.861 "copy": false, 00:32:39.861 "nvme_iov_md": false 00:32:39.861 }, 00:32:39.861 "driver_specific": { 00:32:39.861 "nvme": [ 00:32:39.861 { 00:32:39.861 "pci_address": "0000:d8:00.0", 00:32:39.861 "trid": { 00:32:39.861 "trtype": "PCIe", 00:32:39.861 "traddr": "0000:d8:00.0" 00:32:39.861 }, 00:32:39.861 "ctrlr_data": { 00:32:39.861 "cntlid": 0, 00:32:39.861 "vendor_id": "0x8086", 00:32:39.861 "model_number": "INTEL SSDPE2KX020T8", 00:32:39.861 "serial_number": "BTLJ125505KA2P0BGN", 00:32:39.861 "firmware_revision": "VDV10170", 00:32:39.861 "oacs": { 00:32:39.861 "security": 0, 00:32:39.861 "format": 1, 00:32:39.861 "firmware": 1, 00:32:39.861 "ns_manage": 1 00:32:39.861 }, 00:32:39.861 "multi_ctrlr": false, 00:32:39.861 "ana_reporting": false 00:32:39.861 }, 00:32:39.861 "vs": { 00:32:39.861 "nvme_version": "1.2" 00:32:39.861 }, 00:32:39.861 "ns_data": { 00:32:39.861 "id": 1, 00:32:39.861 "can_share": false 00:32:39.861 } 00:32:39.861 } 00:32:39.861 ], 00:32:39.861 "mp_policy": "active_passive" 00:32:39.861 } 00:32:39.861 } 00:32:39.861 ] 00:32:39.861 10:42:52 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:32:39.861 10:42:52 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:32:40.862 25988a32-4982-40ba-8844-134d9b1f82ff 00:32:40.862 10:42:53 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:32:41.121 0d00e917-80a0-4216-aed4-c176ed49a39c 00:32:41.121 10:42:53 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:32:41.121 10:42:53 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:32:41.121 10:42:53 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.121 10:42:53 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:32:41.121 10:42:53 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.121 10:42:53 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.121 10:42:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:41.380 10:42:54 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:32:41.639 [ 00:32:41.639 { 00:32:41.639 "name": "0d00e917-80a0-4216-aed4-c176ed49a39c", 00:32:41.639 "aliases": [ 00:32:41.639 "lvs0/lv0" 00:32:41.639 ], 00:32:41.639 "product_name": "Logical Volume", 00:32:41.639 "block_size": 512, 00:32:41.639 "num_blocks": 204800, 00:32:41.639 "uuid": "0d00e917-80a0-4216-aed4-c176ed49a39c", 00:32:41.639 "assigned_rate_limits": { 00:32:41.639 "rw_ios_per_sec": 0, 00:32:41.639 "rw_mbytes_per_sec": 0, 00:32:41.639 "r_mbytes_per_sec": 0, 00:32:41.639 "w_mbytes_per_sec": 0 00:32:41.639 }, 00:32:41.639 "claimed": false, 00:32:41.639 "zoned": false, 00:32:41.639 "supported_io_types": { 00:32:41.639 "read": true, 00:32:41.639 "write": true, 00:32:41.639 "unmap": true, 00:32:41.639 "flush": false, 00:32:41.639 "reset": true, 00:32:41.639 "nvme_admin": false, 
00:32:41.639 "nvme_io": false, 00:32:41.639 "nvme_io_md": false, 00:32:41.639 "write_zeroes": true, 00:32:41.639 "zcopy": false, 00:32:41.639 "get_zone_info": false, 00:32:41.639 "zone_management": false, 00:32:41.639 "zone_append": false, 00:32:41.639 "compare": false, 00:32:41.639 "compare_and_write": false, 00:32:41.639 "abort": false, 00:32:41.639 "seek_hole": true, 00:32:41.639 "seek_data": true, 00:32:41.639 "copy": false, 00:32:41.639 "nvme_iov_md": false 00:32:41.639 }, 00:32:41.639 "driver_specific": { 00:32:41.639 "lvol": { 00:32:41.639 "lvol_store_uuid": "25988a32-4982-40ba-8844-134d9b1f82ff", 00:32:41.639 "base_bdev": "Nvme0n1", 00:32:41.639 "thin_provision": true, 00:32:41.639 "num_allocated_clusters": 0, 00:32:41.639 "snapshot": false, 00:32:41.639 "clone": false, 00:32:41.639 "esnap_clone": false 00:32:41.639 } 00:32:41.639 } 00:32:41.639 } 00:32:41.639 ] 00:32:41.639 10:42:54 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:32:41.639 10:42:54 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:32:41.639 10:42:54 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:32:41.898 [2024-07-26 10:42:54.576162] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:32:41.898 COMP_lvs0/lv0 00:32:41.898 10:42:54 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:32:41.898 10:42:54 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:32:41.898 10:42:54 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.898 10:42:54 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:32:41.898 10:42:54 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.898 10:42:54 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.898 10:42:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:42.157 10:42:54 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:32:42.157 [ 00:32:42.157 { 00:32:42.157 "name": "COMP_lvs0/lv0", 00:32:42.157 "aliases": [ 00:32:42.157 "d806b376-381d-5a67-9254-e0a14d9ec655" 00:32:42.157 ], 00:32:42.157 "product_name": "compress", 00:32:42.157 "block_size": 512, 00:32:42.157 "num_blocks": 200704, 00:32:42.157 "uuid": "d806b376-381d-5a67-9254-e0a14d9ec655", 00:32:42.157 "assigned_rate_limits": { 00:32:42.157 "rw_ios_per_sec": 0, 00:32:42.157 "rw_mbytes_per_sec": 0, 00:32:42.157 "r_mbytes_per_sec": 0, 00:32:42.157 "w_mbytes_per_sec": 0 00:32:42.157 }, 00:32:42.157 "claimed": false, 00:32:42.157 "zoned": false, 00:32:42.157 "supported_io_types": { 00:32:42.157 "read": true, 00:32:42.157 "write": true, 00:32:42.157 "unmap": false, 00:32:42.157 "flush": false, 00:32:42.157 "reset": false, 00:32:42.157 "nvme_admin": false, 00:32:42.157 "nvme_io": false, 00:32:42.157 "nvme_io_md": false, 00:32:42.157 "write_zeroes": true, 00:32:42.157 "zcopy": false, 00:32:42.157 "get_zone_info": false, 00:32:42.157 "zone_management": false, 00:32:42.157 "zone_append": false, 00:32:42.157 "compare": false, 00:32:42.157 "compare_and_write": false, 00:32:42.157 "abort": false, 00:32:42.157 "seek_hole": false, 00:32:42.157 "seek_data": false, 00:32:42.157 "copy": 
false, 00:32:42.157 "nvme_iov_md": false 00:32:42.157 }, 00:32:42.157 "driver_specific": { 00:32:42.157 "compress": { 00:32:42.157 "name": "COMP_lvs0/lv0", 00:32:42.157 "base_bdev_name": "0d00e917-80a0-4216-aed4-c176ed49a39c", 00:32:42.157 "pm_path": "/tmp/pmem/d3276a62-051a-4881-8e80-924d1d3ddb2b" 00:32:42.157 } 00:32:42.157 } 00:32:42.157 } 00:32:42.157 ] 00:32:42.157 10:42:55 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:32:42.157 10:42:55 compress_compdev -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:42.416 10:42:55 compress_compdev -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:32:42.675 10:42:55 compress_compdev -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:42.934 [2024-07-26 10:42:55.705238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.934 10:42:55 compress_compdev -- compress/compress.sh@109 -- # perf_pid=3562686 00:32:42.934 10:42:55 compress_compdev -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:42.934 10:42:55 compress_compdev -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:32:42.934 10:42:55 compress_compdev -- compress/compress.sh@113 -- # wait 3562686 00:32:43.192 [2024-07-26 10:42:55.960778] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:15.273 Initializing NVMe Controllers 00:33:15.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:15.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:15.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:15.273 Initialization complete. Launching workers. 
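Once COMP_lvs0/lv0 exists, the test exports it over NVMe/TCP and drives it with spdk_nvme_perf from the host side of the veth topology. A sketch of that export and workload, with the RPC names, NQN, listener address and perf flags taken from the trace; the commented nvme-cli connect at the end is only an illustrative alternative and not part of the test:
#!/usr/bin/env bash
# Sketch of the NVMe/TCP export of COMP_lvs0/lv0 and the perf run above.
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Drive the namespace with SPDK's userspace initiator (flags as in the trace)
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50
# A kernel initiator could instead attach with nvme-cli (needs the nvme-tcp module):
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0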
00:33:15.273 ======================================================== 00:33:15.273 Latency(us) 00:33:15.273 Device Information : IOPS MiB/s Average min max 00:33:15.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 5019.97 19.61 12751.03 1082.37 28546.13 00:33:15.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 3142.93 12.28 20366.57 2737.34 38897.57 00:33:15.273 ======================================================== 00:33:15.273 Total : 8162.90 31.89 15683.22 1082.37 38897.57 00:33:15.273 00:33:15.273 10:43:26 compress_compdev -- compress/compress.sh@114 -- # destroy_vols 00:33:15.273 10:43:26 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:33:15.273 10:43:26 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:33:15.273 10:43:26 compress_compdev -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:33:15.273 10:43:26 compress_compdev -- compress/compress.sh@117 -- # nvmftestfini 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@117 -- # sync 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@120 -- # set +e 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:15.273 rmmod nvme_tcp 00:33:15.273 rmmod nvme_fabrics 00:33:15.273 rmmod nvme_keyring 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@124 -- # set -e 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@125 -- # return 0 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@489 -- # '[' -n 3561408 ']' 00:33:15.273 10:43:26 compress_compdev -- nvmf/common.sh@490 -- # killprocess 3561408 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3561408 ']' 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3561408 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3561408 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3561408' 00:33:15.273 killing process with pid 3561408 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@969 -- # kill 3561408 00:33:15.273 10:43:26 compress_compdev -- common/autotest_common.sh@974 -- # wait 3561408 00:33:16.656 10:43:29 compress_compdev -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:16.656 10:43:29 compress_compdev -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:16.656 10:43:29 compress_compdev -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:16.656 10:43:29 compress_compdev -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:16.656 
10:43:29 compress_compdev -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:16.656 10:43:29 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.656 10:43:29 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:16.656 10:43:29 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.656 10:43:29 compress_compdev -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:16.656 10:43:29 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:33:16.656 00:33:16.656 real 2m12.130s 00:33:16.656 user 6m2.524s 00:33:16.656 sys 0m22.218s 00:33:16.656 10:43:29 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.656 10:43:29 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:33:16.656 ************************************ 00:33:16.656 END TEST compress_compdev 00:33:16.656 ************************************ 00:33:16.656 10:43:29 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:33:16.656 10:43:29 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:16.656 10:43:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:16.656 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:33:16.656 ************************************ 00:33:16.656 START TEST compress_isal 00:33:16.656 ************************************ 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:33:16.656 * Looking for test storage... 00:33:16.656 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:33:16.656 10:43:29 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.656 10:43:29 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.656 10:43:29 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.656 10:43:29 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.656 10:43:29 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.656 10:43:29 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.656 10:43:29 compress_isal -- paths/export.sh@5 -- # export PATH 00:33:16.656 10:43:29 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@47 -- # : 0 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.656 10:43:29 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:33:16.656 10:43:29 
compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3568144 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3568144 00:33:16.656 10:43:29 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3568144 ']' 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.656 10:43:29 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:33:16.657 [2024-07-26 10:43:29.511304] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:16.657 [2024-07-26 10:43:29.511361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568144 ] 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: 
Requested device 0000:3d:02.3 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.916 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:16.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:16.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:16.917 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:16.917 [2024-07-26 10:43:29.634202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:16.917 [2024-07-26 10:43:29.678337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.917 [2024-07-26 10:43:29.678344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.853 10:43:30 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.853 10:43:30 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:33:17.853 10:43:30 compress_isal -- compress/compress.sh@74 -- # create_vols 00:33:17.853 10:43:30 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:17.853 10:43:30 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:21.142 10:43:33 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:21.142 10:43:33 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:33:21.142 [ 00:33:21.142 { 00:33:21.142 "name": "Nvme0n1", 00:33:21.142 "aliases": [ 00:33:21.142 "5b3cd936-7335-4418-b427-72136b47b5b8" 00:33:21.142 ], 00:33:21.142 "product_name": "NVMe disk", 00:33:21.142 "block_size": 512, 00:33:21.142 "num_blocks": 3907029168, 00:33:21.142 "uuid": "5b3cd936-7335-4418-b427-72136b47b5b8", 00:33:21.142 "assigned_rate_limits": { 00:33:21.142 "rw_ios_per_sec": 0, 00:33:21.142 "rw_mbytes_per_sec": 0, 00:33:21.142 "r_mbytes_per_sec": 0, 00:33:21.142 "w_mbytes_per_sec": 0 00:33:21.142 }, 00:33:21.142 "claimed": false, 00:33:21.142 "zoned": false, 00:33:21.142 "supported_io_types": { 00:33:21.142 "read": true, 00:33:21.142 "write": true, 00:33:21.142 "unmap": true, 00:33:21.142 "flush": true, 00:33:21.142 "reset": true, 00:33:21.142 "nvme_admin": true, 00:33:21.142 "nvme_io": true, 00:33:21.142 "nvme_io_md": false, 00:33:21.142 "write_zeroes": true, 00:33:21.142 "zcopy": false, 00:33:21.142 "get_zone_info": false, 00:33:21.142 "zone_management": false, 00:33:21.143 "zone_append": false, 00:33:21.143 "compare": false, 00:33:21.143 "compare_and_write": false, 00:33:21.143 "abort": true, 00:33:21.143 "seek_hole": false, 00:33:21.143 "seek_data": false, 00:33:21.143 "copy": false, 00:33:21.143 "nvme_iov_md": false 00:33:21.143 }, 00:33:21.143 "driver_specific": { 00:33:21.143 "nvme": [ 00:33:21.143 { 00:33:21.143 "pci_address": "0000:d8:00.0", 00:33:21.143 "trid": { 00:33:21.143 "trtype": "PCIe", 00:33:21.143 "traddr": "0000:d8:00.0" 00:33:21.143 }, 00:33:21.143 "ctrlr_data": { 00:33:21.143 "cntlid": 0, 00:33:21.143 "vendor_id": "0x8086", 00:33:21.143 "model_number": "INTEL SSDPE2KX020T8", 00:33:21.143 "serial_number": "BTLJ125505KA2P0BGN", 00:33:21.143 "firmware_revision": "VDV10170", 00:33:21.143 "oacs": { 00:33:21.143 "security": 0, 00:33:21.143 "format": 1, 00:33:21.143 "firmware": 1, 00:33:21.143 "ns_manage": 1 00:33:21.143 }, 00:33:21.143 "multi_ctrlr": false, 00:33:21.143 "ana_reporting": false 00:33:21.143 }, 00:33:21.143 "vs": { 00:33:21.143 "nvme_version": "1.2" 00:33:21.143 }, 00:33:21.143 "ns_data": { 00:33:21.143 "id": 1, 00:33:21.143 "can_share": false 00:33:21.143 } 00:33:21.143 } 00:33:21.143 ], 00:33:21.143 "mp_policy": "active_passive" 00:33:21.143 } 00:33:21.143 } 00:33:21.143 ] 00:33:21.143 10:43:33 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:21.143 10:43:33 compress_isal -- compress/compress.sh@37 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:33:22.518 fb576bc8-3c77-4acf-a0ed-42405ba96044 00:33:22.518 10:43:35 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:33:22.776 95aae240-50eb-437a-86cf-7bc8761301a2 00:33:22.776 10:43:35 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:22.776 10:43:35 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:33:23.036 [ 00:33:23.036 { 00:33:23.036 "name": "95aae240-50eb-437a-86cf-7bc8761301a2", 00:33:23.036 "aliases": [ 00:33:23.036 "lvs0/lv0" 00:33:23.036 ], 00:33:23.036 "product_name": "Logical Volume", 00:33:23.036 "block_size": 512, 00:33:23.036 "num_blocks": 204800, 00:33:23.036 "uuid": "95aae240-50eb-437a-86cf-7bc8761301a2", 00:33:23.036 "assigned_rate_limits": { 00:33:23.036 "rw_ios_per_sec": 0, 00:33:23.036 "rw_mbytes_per_sec": 0, 00:33:23.036 "r_mbytes_per_sec": 0, 00:33:23.036 "w_mbytes_per_sec": 0 00:33:23.036 }, 00:33:23.036 "claimed": false, 00:33:23.036 "zoned": false, 00:33:23.036 "supported_io_types": { 00:33:23.036 "read": true, 00:33:23.036 "write": true, 00:33:23.036 "unmap": true, 00:33:23.036 "flush": false, 00:33:23.036 "reset": true, 00:33:23.036 "nvme_admin": false, 00:33:23.036 "nvme_io": false, 00:33:23.036 "nvme_io_md": false, 00:33:23.036 "write_zeroes": true, 00:33:23.036 "zcopy": false, 00:33:23.036 "get_zone_info": false, 00:33:23.036 "zone_management": false, 00:33:23.036 "zone_append": false, 00:33:23.036 "compare": false, 00:33:23.036 "compare_and_write": false, 00:33:23.036 "abort": false, 00:33:23.036 "seek_hole": true, 00:33:23.036 "seek_data": true, 00:33:23.036 "copy": false, 00:33:23.036 "nvme_iov_md": false 00:33:23.036 }, 00:33:23.036 "driver_specific": { 00:33:23.036 "lvol": { 00:33:23.037 "lvol_store_uuid": "fb576bc8-3c77-4acf-a0ed-42405ba96044", 00:33:23.037 "base_bdev": "Nvme0n1", 00:33:23.037 "thin_provision": true, 00:33:23.037 "num_allocated_clusters": 0, 00:33:23.037 "snapshot": false, 00:33:23.037 "clone": false, 00:33:23.037 "esnap_clone": false 00:33:23.037 } 00:33:23.037 } 00:33:23.037 } 00:33:23.037 ] 00:33:23.037 10:43:35 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:23.037 10:43:35 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:33:23.037 10:43:35 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:33:23.343 [2024-07-26 10:43:36.102464] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:33:23.343 COMP_lvs0/lv0 00:33:23.343 10:43:36 compress_isal -- compress/compress.sh@46 -- # waitforbdev 
COMP_lvs0/lv0 00:33:23.343 10:43:36 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:33:23.343 10:43:36 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:23.343 10:43:36 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:23.343 10:43:36 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:23.343 10:43:36 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:23.343 10:43:36 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:23.603 10:43:36 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:33:23.862 [ 00:33:23.862 { 00:33:23.862 "name": "COMP_lvs0/lv0", 00:33:23.862 "aliases": [ 00:33:23.862 "29a624c1-26b8-540f-a5da-1d158481985e" 00:33:23.862 ], 00:33:23.862 "product_name": "compress", 00:33:23.862 "block_size": 512, 00:33:23.862 "num_blocks": 200704, 00:33:23.862 "uuid": "29a624c1-26b8-540f-a5da-1d158481985e", 00:33:23.862 "assigned_rate_limits": { 00:33:23.862 "rw_ios_per_sec": 0, 00:33:23.862 "rw_mbytes_per_sec": 0, 00:33:23.862 "r_mbytes_per_sec": 0, 00:33:23.862 "w_mbytes_per_sec": 0 00:33:23.862 }, 00:33:23.862 "claimed": false, 00:33:23.862 "zoned": false, 00:33:23.862 "supported_io_types": { 00:33:23.862 "read": true, 00:33:23.862 "write": true, 00:33:23.862 "unmap": false, 00:33:23.862 "flush": false, 00:33:23.862 "reset": false, 00:33:23.862 "nvme_admin": false, 00:33:23.862 "nvme_io": false, 00:33:23.862 "nvme_io_md": false, 00:33:23.862 "write_zeroes": true, 00:33:23.862 "zcopy": false, 00:33:23.862 "get_zone_info": false, 00:33:23.862 "zone_management": false, 00:33:23.862 "zone_append": false, 00:33:23.862 "compare": false, 00:33:23.862 "compare_and_write": false, 00:33:23.862 "abort": false, 00:33:23.862 "seek_hole": false, 00:33:23.862 "seek_data": false, 00:33:23.862 "copy": false, 00:33:23.862 "nvme_iov_md": false 00:33:23.862 }, 00:33:23.862 "driver_specific": { 00:33:23.862 "compress": { 00:33:23.862 "name": "COMP_lvs0/lv0", 00:33:23.862 "base_bdev_name": "95aae240-50eb-437a-86cf-7bc8761301a2", 00:33:23.862 "pm_path": "/tmp/pmem/2b7de3d5-52ba-47d4-81ee-3e24596c6f58" 00:33:23.862 } 00:33:23.862 } 00:33:23.862 } 00:33:23.862 ] 00:33:23.862 10:43:36 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:23.862 10:43:36 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:23.862 Running I/O for 3 seconds... 
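The isal variant repeats the same flow, but with bdevperf started in -z (wait-for-RPC) mode and a 3-second verify workload kicked off through bdevperf.py. A sketch of that driver sequence, with flags and RPC calls taken from the trace (the test itself waits for the RPC socket via waitforlisten, which is omitted here):
#!/usr/bin/env bash
# Sketch of the bdevperf-driven verify run used for the isal case.
spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
# Start bdevperf idle; it waits for RPCs until perform_tests is invoked
$spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 &
bdevperf_pid=$!
# Same create_vols sequence as the compdev case above
$spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
$rpc bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
$rpc bdev_lvol_create -t -l lvs0 lv0 100
$rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
# Start the timed run; bdevperf prints the latency table when -t expires
$spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
# Tear down: delete the compress bdev and lvstore, then stop bdevperf
$rpc bdev_compress_delete COMP_lvs0/lv0
$rpc bdev_lvol_delete_lvstore -l lvs0
kill $bdevperf_pid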
00:33:27.151 00:33:27.152 Latency(us) 00:33:27.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.152 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:33:27.152 Verification LBA range: start 0x0 length 0x3100 00:33:27.152 COMP_lvs0/lv0 : 3.01 3494.00 13.65 0.00 0.00 9102.39 59.80 14680.06 00:33:27.152 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:33:27.152 Verification LBA range: start 0x3100 length 0x3100 00:33:27.152 COMP_lvs0/lv0 : 3.01 3530.31 13.79 0.00 0.00 9021.82 56.93 15204.35 00:33:27.152 =================================================================================================================== 00:33:27.152 Total : 7024.32 27.44 0.00 0.00 9061.92 56.93 15204.35 00:33:27.152 0 00:33:27.152 10:43:39 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:33:27.152 10:43:39 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:33:27.152 10:43:39 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:33:27.410 10:43:40 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:27.410 10:43:40 compress_isal -- compress/compress.sh@78 -- # killprocess 3568144 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3568144 ']' 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3568144 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@955 -- # uname 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3568144 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3568144' 00:33:27.410 killing process with pid 3568144 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@969 -- # kill 3568144 00:33:27.410 Received shutdown signal, test time was about 3.000000 seconds 00:33:27.410 00:33:27.410 Latency(us) 00:33:27.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.410 =================================================================================================================== 00:33:27.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.410 10:43:40 compress_isal -- common/autotest_common.sh@974 -- # wait 3568144 00:33:29.948 10:43:42 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:33:29.948 10:43:42 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:33:29.948 10:43:42 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3570282 00:33:29.948 10:43:42 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:29.948 10:43:42 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:33:29.948 10:43:42 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3570282 00:33:29.948 10:43:42 compress_isal -- common/autotest_common.sh@831 -- # '[' 
-z 3570282 ']' 00:33:29.948 10:43:42 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.948 10:43:42 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:29.948 10:43:42 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.948 10:43:42 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:29.948 10:43:42 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:33:29.948 [2024-07-26 10:43:42.609261] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:29.948 [2024-07-26 10:43:42.609325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570282 ] 00:33:29.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.948 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:29.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:33:29.949 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:29.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:29.949 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:29.949 [2024-07-26 10:43:42.733007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:29.949 [2024-07-26 10:43:42.776806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:29.949 [2024-07-26 10:43:42.776812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.887 10:43:43 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.887 10:43:43 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:33:30.887 10:43:43 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:33:30.887 10:43:43 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:30.887 10:43:43 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:34.177 10:43:46 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:34.177 10:43:46 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:33:34.177 [ 00:33:34.177 { 00:33:34.177 "name": "Nvme0n1", 00:33:34.177 "aliases": [ 00:33:34.177 "9a0166d9-a418-4dbd-acbe-0fa5c54d3869" 00:33:34.177 ], 00:33:34.177 "product_name": "NVMe disk", 00:33:34.177 "block_size": 512, 00:33:34.177 "num_blocks": 3907029168, 00:33:34.177 "uuid": "9a0166d9-a418-4dbd-acbe-0fa5c54d3869", 00:33:34.177 "assigned_rate_limits": { 00:33:34.177 "rw_ios_per_sec": 0, 00:33:34.177 "rw_mbytes_per_sec": 0, 00:33:34.177 "r_mbytes_per_sec": 0, 00:33:34.177 "w_mbytes_per_sec": 0 00:33:34.177 }, 00:33:34.177 "claimed": false, 00:33:34.177 "zoned": false, 00:33:34.177 "supported_io_types": { 00:33:34.177 "read": true, 00:33:34.177 "write": true, 00:33:34.177 "unmap": true, 00:33:34.177 "flush": true, 00:33:34.177 "reset": true, 00:33:34.177 "nvme_admin": true, 00:33:34.177 "nvme_io": true, 00:33:34.177 "nvme_io_md": false, 00:33:34.177 "write_zeroes": true, 00:33:34.177 "zcopy": false, 00:33:34.177 "get_zone_info": false, 00:33:34.177 "zone_management": false, 00:33:34.177 "zone_append": false, 00:33:34.177 "compare": false, 00:33:34.177 "compare_and_write": false, 00:33:34.177 "abort": true, 00:33:34.177 "seek_hole": false, 00:33:34.177 "seek_data": false, 00:33:34.177 "copy": false, 00:33:34.177 "nvme_iov_md": false 00:33:34.177 }, 00:33:34.177 "driver_specific": { 00:33:34.177 "nvme": [ 00:33:34.177 { 00:33:34.177 "pci_address": "0000:d8:00.0", 00:33:34.177 "trid": { 00:33:34.177 "trtype": "PCIe", 00:33:34.177 "traddr": "0000:d8:00.0" 00:33:34.177 }, 00:33:34.177 "ctrlr_data": { 00:33:34.177 "cntlid": 0, 00:33:34.177 "vendor_id": "0x8086", 00:33:34.177 "model_number": "INTEL SSDPE2KX020T8", 00:33:34.177 "serial_number": "BTLJ125505KA2P0BGN", 00:33:34.177 "firmware_revision": "VDV10170", 00:33:34.177 "oacs": { 00:33:34.177 "security": 0, 00:33:34.177 "format": 1, 00:33:34.177 "firmware": 1, 00:33:34.177 "ns_manage": 1 00:33:34.177 }, 00:33:34.177 "multi_ctrlr": false, 00:33:34.177 "ana_reporting": false 00:33:34.177 }, 00:33:34.177 "vs": { 00:33:34.177 "nvme_version": "1.2" 00:33:34.177 }, 00:33:34.177 "ns_data": { 00:33:34.177 "id": 1, 00:33:34.177 "can_share": false 00:33:34.177 } 00:33:34.177 } 00:33:34.177 ], 00:33:34.177 "mp_policy": "active_passive" 00:33:34.177 } 00:33:34.177 } 00:33:34.177 ] 00:33:34.177 10:43:47 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:34.177 10:43:47 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:33:35.556 dac1dd72-5c84-452b-adcf-53dfaafc86bb 00:33:35.556 10:43:48 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:33:35.556 ab3583bf-f63f-432d-8fae-f3fa2cf4f8e6 00:33:35.816 10:43:48 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:33:35.816 10:43:48 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:33:35.816 10:43:48 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:35.816 10:43:48 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:35.816 10:43:48 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:35.816 10:43:48 compress_isal -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:35.816 10:43:48 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:35.816 10:43:48 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:33:36.075 [ 00:33:36.075 { 00:33:36.075 "name": "ab3583bf-f63f-432d-8fae-f3fa2cf4f8e6", 00:33:36.075 "aliases": [ 00:33:36.075 "lvs0/lv0" 00:33:36.075 ], 00:33:36.075 "product_name": "Logical Volume", 00:33:36.075 "block_size": 512, 00:33:36.075 "num_blocks": 204800, 00:33:36.075 "uuid": "ab3583bf-f63f-432d-8fae-f3fa2cf4f8e6", 00:33:36.075 "assigned_rate_limits": { 00:33:36.075 "rw_ios_per_sec": 0, 00:33:36.075 "rw_mbytes_per_sec": 0, 00:33:36.075 "r_mbytes_per_sec": 0, 00:33:36.075 "w_mbytes_per_sec": 0 00:33:36.075 }, 00:33:36.075 "claimed": false, 00:33:36.075 "zoned": false, 00:33:36.075 "supported_io_types": { 00:33:36.075 "read": true, 00:33:36.075 "write": true, 00:33:36.075 "unmap": true, 00:33:36.075 "flush": false, 00:33:36.075 "reset": true, 00:33:36.075 "nvme_admin": false, 00:33:36.075 "nvme_io": false, 00:33:36.075 "nvme_io_md": false, 00:33:36.075 "write_zeroes": true, 00:33:36.075 "zcopy": false, 00:33:36.075 "get_zone_info": false, 00:33:36.075 "zone_management": false, 00:33:36.075 "zone_append": false, 00:33:36.075 "compare": false, 00:33:36.075 "compare_and_write": false, 00:33:36.075 "abort": false, 00:33:36.075 "seek_hole": true, 00:33:36.075 "seek_data": true, 00:33:36.075 "copy": false, 00:33:36.075 "nvme_iov_md": false 00:33:36.075 }, 00:33:36.075 "driver_specific": { 00:33:36.075 "lvol": { 00:33:36.075 "lvol_store_uuid": "dac1dd72-5c84-452b-adcf-53dfaafc86bb", 00:33:36.075 "base_bdev": "Nvme0n1", 00:33:36.075 "thin_provision": true, 00:33:36.075 "num_allocated_clusters": 0, 00:33:36.075 "snapshot": false, 00:33:36.075 "clone": false, 00:33:36.075 "esnap_clone": false 00:33:36.075 } 00:33:36.075 } 00:33:36.075 } 00:33:36.075 ] 00:33:36.075 10:43:48 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:36.075 10:43:48 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:33:36.075 10:43:48 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:33:36.334 [2024-07-26 10:43:49.131692] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:33:36.334 COMP_lvs0/lv0 00:33:36.334 10:43:49 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:33:36.334 10:43:49 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:33:36.334 10:43:49 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:36.334 10:43:49 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:36.334 10:43:49 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:36.334 10:43:49 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:36.334 10:43:49 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:36.593 10:43:49 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:33:36.852 [ 00:33:36.852 { 00:33:36.852 "name": "COMP_lvs0/lv0", 
00:33:36.852 "aliases": [ 00:33:36.852 "4d25c8fe-88bc-5e9c-90a8-f580bd3db5a2" 00:33:36.852 ], 00:33:36.852 "product_name": "compress", 00:33:36.852 "block_size": 512, 00:33:36.852 "num_blocks": 200704, 00:33:36.852 "uuid": "4d25c8fe-88bc-5e9c-90a8-f580bd3db5a2", 00:33:36.852 "assigned_rate_limits": { 00:33:36.852 "rw_ios_per_sec": 0, 00:33:36.852 "rw_mbytes_per_sec": 0, 00:33:36.852 "r_mbytes_per_sec": 0, 00:33:36.852 "w_mbytes_per_sec": 0 00:33:36.852 }, 00:33:36.852 "claimed": false, 00:33:36.852 "zoned": false, 00:33:36.852 "supported_io_types": { 00:33:36.852 "read": true, 00:33:36.852 "write": true, 00:33:36.852 "unmap": false, 00:33:36.852 "flush": false, 00:33:36.852 "reset": false, 00:33:36.852 "nvme_admin": false, 00:33:36.852 "nvme_io": false, 00:33:36.852 "nvme_io_md": false, 00:33:36.852 "write_zeroes": true, 00:33:36.852 "zcopy": false, 00:33:36.852 "get_zone_info": false, 00:33:36.852 "zone_management": false, 00:33:36.852 "zone_append": false, 00:33:36.852 "compare": false, 00:33:36.852 "compare_and_write": false, 00:33:36.852 "abort": false, 00:33:36.852 "seek_hole": false, 00:33:36.852 "seek_data": false, 00:33:36.852 "copy": false, 00:33:36.852 "nvme_iov_md": false 00:33:36.852 }, 00:33:36.852 "driver_specific": { 00:33:36.852 "compress": { 00:33:36.852 "name": "COMP_lvs0/lv0", 00:33:36.852 "base_bdev_name": "ab3583bf-f63f-432d-8fae-f3fa2cf4f8e6", 00:33:36.852 "pm_path": "/tmp/pmem/c65b9e68-7734-4115-9b29-e351117dff8c" 00:33:36.852 } 00:33:36.852 } 00:33:36.852 } 00:33:36.852 ] 00:33:36.852 10:43:49 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:36.852 10:43:49 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:36.852 Running I/O for 3 seconds... 
00:33:40.133 00:33:40.133 Latency(us) 00:33:40.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.133 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:33:40.133 Verification LBA range: start 0x0 length 0x3100 00:33:40.133 COMP_lvs0/lv0 : 3.01 3513.19 13.72 0.00 0.00 9055.85 59.80 13841.20 00:33:40.133 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:33:40.133 Verification LBA range: start 0x3100 length 0x3100 00:33:40.133 COMP_lvs0/lv0 : 3.01 3505.10 13.69 0.00 0.00 9085.50 55.71 14155.78 00:33:40.133 =================================================================================================================== 00:33:40.133 Total : 7018.29 27.42 0.00 0.00 9070.66 55.71 14155.78 00:33:40.133 0 00:33:40.133 10:43:52 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:33:40.133 10:43:52 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:33:40.133 10:43:52 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:33:40.390 10:43:53 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:40.390 10:43:53 compress_isal -- compress/compress.sh@78 -- # killprocess 3570282 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3570282 ']' 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3570282 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@955 -- # uname 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3570282 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3570282' 00:33:40.390 killing process with pid 3570282 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@969 -- # kill 3570282 00:33:40.390 Received shutdown signal, test time was about 3.000000 seconds 00:33:40.390 00:33:40.390 Latency(us) 00:33:40.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.390 =================================================================================================================== 00:33:40.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:40.390 10:43:53 compress_isal -- common/autotest_common.sh@974 -- # wait 3570282 00:33:42.927 10:43:55 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:33:42.927 10:43:55 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:33:42.927 10:43:55 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3572467 00:33:42.927 10:43:55 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:42.927 10:43:55 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:33:42.928 10:43:55 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3572467 00:33:42.928 10:43:55 compress_isal -- common/autotest_common.sh@831 -- # '[' 
-z 3572467 ']' 00:33:42.928 10:43:55 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.928 10:43:55 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.928 10:43:55 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.928 10:43:55 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.928 10:43:55 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:33:42.928 [2024-07-26 10:43:55.774249] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:42.928 [2024-07-26 10:43:55.774312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3572467 ] 00:33:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.186 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.186 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.186 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.186 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:43.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:33:43.187 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:43.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:43.187 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:43.187 [2024-07-26 10:43:55.895788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:43.187 [2024-07-26 10:43:55.940037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.187 [2024-07-26 10:43:55.940044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.123 10:43:56 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:44.123 10:43:56 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:33:44.123 10:43:56 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:33:44.123 10:43:56 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:44.123 10:43:56 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:33:47.412 10:43:59 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:33:47.412 10:43:59 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:33:47.412 10:43:59 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:47.412 10:43:59 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:47.412 10:43:59 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:47.412 10:43:59 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:47.412 10:43:59 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:47.412 10:44:00 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:33:47.412 [ 00:33:47.412 { 00:33:47.412 "name": "Nvme0n1", 00:33:47.412 "aliases": [ 00:33:47.412 "7881426f-16d2-485a-92c6-cbcff00f2078" 00:33:47.412 ], 00:33:47.412 "product_name": "NVMe disk", 00:33:47.412 "block_size": 512, 00:33:47.412 "num_blocks": 3907029168, 00:33:47.412 "uuid": "7881426f-16d2-485a-92c6-cbcff00f2078", 00:33:47.412 "assigned_rate_limits": { 00:33:47.412 "rw_ios_per_sec": 0, 00:33:47.412 "rw_mbytes_per_sec": 0, 00:33:47.412 "r_mbytes_per_sec": 0, 00:33:47.412 "w_mbytes_per_sec": 0 00:33:47.412 }, 00:33:47.412 "claimed": false, 00:33:47.412 "zoned": false, 00:33:47.412 "supported_io_types": { 00:33:47.412 "read": true, 00:33:47.412 "write": true, 00:33:47.412 "unmap": true, 00:33:47.412 "flush": true, 00:33:47.412 "reset": true, 00:33:47.412 "nvme_admin": true, 00:33:47.412 "nvme_io": true, 00:33:47.412 "nvme_io_md": false, 00:33:47.412 "write_zeroes": true, 00:33:47.412 "zcopy": false, 00:33:47.412 "get_zone_info": false, 00:33:47.412 "zone_management": false, 00:33:47.412 "zone_append": false, 00:33:47.412 "compare": false, 00:33:47.412 "compare_and_write": false, 00:33:47.412 "abort": true, 00:33:47.412 "seek_hole": false, 00:33:47.412 "seek_data": false, 00:33:47.412 "copy": false, 00:33:47.412 "nvme_iov_md": false 00:33:47.412 }, 00:33:47.412 "driver_specific": { 00:33:47.412 "nvme": [ 00:33:47.412 { 00:33:47.412 "pci_address": "0000:d8:00.0", 00:33:47.412 "trid": { 00:33:47.412 "trtype": "PCIe", 00:33:47.412 "traddr": "0000:d8:00.0" 00:33:47.412 }, 00:33:47.412 "ctrlr_data": { 00:33:47.412 "cntlid": 0, 00:33:47.412 "vendor_id": "0x8086", 00:33:47.412 "model_number": "INTEL SSDPE2KX020T8", 00:33:47.413 "serial_number": "BTLJ125505KA2P0BGN", 00:33:47.413 "firmware_revision": "VDV10170", 00:33:47.413 "oacs": { 00:33:47.413 "security": 0, 00:33:47.413 "format": 1, 00:33:47.413 "firmware": 1, 00:33:47.413 "ns_manage": 1 00:33:47.413 }, 00:33:47.413 "multi_ctrlr": false, 00:33:47.413 "ana_reporting": false 00:33:47.413 }, 00:33:47.413 "vs": { 00:33:47.413 "nvme_version": "1.2" 00:33:47.413 }, 00:33:47.413 "ns_data": { 00:33:47.413 "id": 1, 00:33:47.413 "can_share": false 00:33:47.413 } 00:33:47.413 } 00:33:47.413 ], 00:33:47.413 "mp_policy": "active_passive" 00:33:47.413 } 00:33:47.413 } 00:33:47.413 ] 00:33:47.413 10:44:00 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:47.413 10:44:00 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:33:48.790 e29472b1-855f-4cc6-b7d3-61231d07c53b 00:33:48.790 10:44:01 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:33:49.048 d539766f-6ea9-4b76-9f64-920892f5953e 00:33:49.048 10:44:01 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:33:49.048 10:44:01 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:33:49.048 10:44:01 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:49.048 10:44:01 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:49.048 10:44:01 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:49.048 10:44:01 compress_isal -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:49.048 10:44:01 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:49.306 10:44:01 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:33:49.306 [ 00:33:49.306 { 00:33:49.306 "name": "d539766f-6ea9-4b76-9f64-920892f5953e", 00:33:49.306 "aliases": [ 00:33:49.306 "lvs0/lv0" 00:33:49.306 ], 00:33:49.306 "product_name": "Logical Volume", 00:33:49.306 "block_size": 512, 00:33:49.306 "num_blocks": 204800, 00:33:49.306 "uuid": "d539766f-6ea9-4b76-9f64-920892f5953e", 00:33:49.306 "assigned_rate_limits": { 00:33:49.306 "rw_ios_per_sec": 0, 00:33:49.306 "rw_mbytes_per_sec": 0, 00:33:49.306 "r_mbytes_per_sec": 0, 00:33:49.306 "w_mbytes_per_sec": 0 00:33:49.306 }, 00:33:49.306 "claimed": false, 00:33:49.306 "zoned": false, 00:33:49.306 "supported_io_types": { 00:33:49.306 "read": true, 00:33:49.306 "write": true, 00:33:49.306 "unmap": true, 00:33:49.306 "flush": false, 00:33:49.306 "reset": true, 00:33:49.306 "nvme_admin": false, 00:33:49.306 "nvme_io": false, 00:33:49.306 "nvme_io_md": false, 00:33:49.306 "write_zeroes": true, 00:33:49.306 "zcopy": false, 00:33:49.306 "get_zone_info": false, 00:33:49.306 "zone_management": false, 00:33:49.306 "zone_append": false, 00:33:49.306 "compare": false, 00:33:49.306 "compare_and_write": false, 00:33:49.306 "abort": false, 00:33:49.306 "seek_hole": true, 00:33:49.306 "seek_data": true, 00:33:49.306 "copy": false, 00:33:49.306 "nvme_iov_md": false 00:33:49.306 }, 00:33:49.306 "driver_specific": { 00:33:49.306 "lvol": { 00:33:49.306 "lvol_store_uuid": "e29472b1-855f-4cc6-b7d3-61231d07c53b", 00:33:49.306 "base_bdev": "Nvme0n1", 00:33:49.306 "thin_provision": true, 00:33:49.306 "num_allocated_clusters": 0, 00:33:49.306 "snapshot": false, 00:33:49.306 "clone": false, 00:33:49.306 "esnap_clone": false 00:33:49.306 } 00:33:49.306 } 00:33:49.306 } 00:33:49.306 ] 00:33:49.306 10:44:02 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:49.306 10:44:02 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:33:49.306 10:44:02 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:33:49.565 [2024-07-26 10:44:02.370878] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:33:49.565 COMP_lvs0/lv0 00:33:49.565 10:44:02 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:33:49.565 10:44:02 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:33:49.565 10:44:02 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:49.565 10:44:02 compress_isal -- common/autotest_common.sh@901 -- # local i 00:33:49.565 10:44:02 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:49.565 10:44:02 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:49.565 10:44:02 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:49.824 10:44:02 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:33:50.083 [ 00:33:50.083 { 00:33:50.083 "name": "COMP_lvs0/lv0", 
00:33:50.083 "aliases": [ 00:33:50.083 "876c4229-22f9-59dd-b805-9c285e0d50b4" 00:33:50.083 ], 00:33:50.083 "product_name": "compress", 00:33:50.083 "block_size": 4096, 00:33:50.083 "num_blocks": 25088, 00:33:50.083 "uuid": "876c4229-22f9-59dd-b805-9c285e0d50b4", 00:33:50.083 "assigned_rate_limits": { 00:33:50.083 "rw_ios_per_sec": 0, 00:33:50.083 "rw_mbytes_per_sec": 0, 00:33:50.083 "r_mbytes_per_sec": 0, 00:33:50.083 "w_mbytes_per_sec": 0 00:33:50.083 }, 00:33:50.083 "claimed": false, 00:33:50.083 "zoned": false, 00:33:50.083 "supported_io_types": { 00:33:50.083 "read": true, 00:33:50.083 "write": true, 00:33:50.083 "unmap": false, 00:33:50.083 "flush": false, 00:33:50.083 "reset": false, 00:33:50.083 "nvme_admin": false, 00:33:50.083 "nvme_io": false, 00:33:50.083 "nvme_io_md": false, 00:33:50.083 "write_zeroes": true, 00:33:50.083 "zcopy": false, 00:33:50.083 "get_zone_info": false, 00:33:50.083 "zone_management": false, 00:33:50.083 "zone_append": false, 00:33:50.083 "compare": false, 00:33:50.083 "compare_and_write": false, 00:33:50.083 "abort": false, 00:33:50.083 "seek_hole": false, 00:33:50.083 "seek_data": false, 00:33:50.083 "copy": false, 00:33:50.083 "nvme_iov_md": false 00:33:50.083 }, 00:33:50.083 "driver_specific": { 00:33:50.083 "compress": { 00:33:50.083 "name": "COMP_lvs0/lv0", 00:33:50.083 "base_bdev_name": "d539766f-6ea9-4b76-9f64-920892f5953e", 00:33:50.083 "pm_path": "/tmp/pmem/d3c92956-c6ab-4992-9f01-b86745f9d5c4" 00:33:50.083 } 00:33:50.083 } 00:33:50.083 } 00:33:50.083 ] 00:33:50.083 10:44:02 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:33:50.083 10:44:02 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:50.083 Running I/O for 3 seconds... 
00:33:53.368 00:33:53.368 Latency(us) 00:33:53.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.368 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:33:53.368 Verification LBA range: start 0x0 length 0x3100 00:33:53.368 COMP_lvs0/lv0 : 3.00 3529.10 13.79 0.00 0.00 9013.75 60.21 14155.78 00:33:53.368 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:33:53.368 Verification LBA range: start 0x3100 length 0x3100 00:33:53.368 COMP_lvs0/lv0 : 3.01 3533.27 13.80 0.00 0.00 9003.29 57.34 14050.92 00:33:53.368 =================================================================================================================== 00:33:53.368 Total : 7062.37 27.59 0.00 0.00 9008.51 57.34 14155.78 00:33:53.368 0 00:33:53.368 10:44:05 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:33:53.368 10:44:05 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:33:53.368 10:44:06 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:33:53.627 10:44:06 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:33:53.627 10:44:06 compress_isal -- compress/compress.sh@78 -- # killprocess 3572467 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3572467 ']' 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3572467 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@955 -- # uname 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3572467 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3572467' 00:33:53.627 killing process with pid 3572467 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@969 -- # kill 3572467 00:33:53.627 Received shutdown signal, test time was about 3.000000 seconds 00:33:53.627 00:33:53.627 Latency(us) 00:33:53.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.627 =================================================================================================================== 00:33:53.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.627 10:44:06 compress_isal -- common/autotest_common.sh@974 -- # wait 3572467 00:33:56.170 10:44:08 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:33:56.171 10:44:08 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:33:56.171 10:44:08 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=3574647 00:33:56.171 10:44:08 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:56.171 10:44:08 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:33:56.171 10:44:08 compress_isal -- compress/compress.sh@57 -- # waitforlisten 3574647 00:33:56.171 10:44:08 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3574647 ']' 00:33:56.171 10:44:08 compress_isal -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.171 10:44:08 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:56.171 10:44:08 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.171 10:44:08 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:56.171 10:44:08 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:33:56.171 [2024-07-26 10:44:08.847965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:33:56.171 [2024-07-26 10:44:08.848028] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574647 ] 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.1 cannot be 
used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:56.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:56.171 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:56.171 [2024-07-26 10:44:08.984481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:56.171 [2024-07-26 10:44:09.029920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.171 [2024-07-26 10:44:09.030014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.171 [2024-07-26 10:44:09.030018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.109 10:44:09 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:57.109 10:44:09 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:33:57.109 10:44:09 compress_isal -- compress/compress.sh@58 -- # create_vols 00:33:57.109 10:44:09 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:57.109 10:44:09 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:00.399 10:44:12 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:00.399 10:44:12 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:34:00.399 10:44:12 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:00.399 10:44:12 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:00.399 10:44:12 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:00.399 10:44:12 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:00.399 10:44:12 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:00.399 10:44:13 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:00.399 [ 00:34:00.399 { 00:34:00.399 "name": "Nvme0n1", 00:34:00.399 "aliases": [ 00:34:00.399 "d519cad7-4189-4bac-91a7-0d0f87444d6a" 00:34:00.399 ], 00:34:00.399 "product_name": "NVMe disk", 00:34:00.399 "block_size": 512, 00:34:00.399 "num_blocks": 3907029168, 00:34:00.399 "uuid": "d519cad7-4189-4bac-91a7-0d0f87444d6a", 00:34:00.399 "assigned_rate_limits": { 00:34:00.399 "rw_ios_per_sec": 0, 00:34:00.399 "rw_mbytes_per_sec": 0, 00:34:00.399 "r_mbytes_per_sec": 0, 00:34:00.399 "w_mbytes_per_sec": 0 00:34:00.399 }, 00:34:00.399 "claimed": false, 00:34:00.399 "zoned": false, 00:34:00.399 "supported_io_types": { 00:34:00.399 "read": true, 00:34:00.399 "write": true, 00:34:00.399 "unmap": true, 00:34:00.399 "flush": true, 00:34:00.399 "reset": true, 00:34:00.399 "nvme_admin": true, 00:34:00.399 "nvme_io": true, 00:34:00.399 "nvme_io_md": false, 00:34:00.399 "write_zeroes": true, 00:34:00.399 "zcopy": false, 00:34:00.399 "get_zone_info": false, 00:34:00.399 "zone_management": false, 00:34:00.399 "zone_append": false, 00:34:00.399 "compare": false, 00:34:00.399 "compare_and_write": false, 00:34:00.399 "abort": true, 00:34:00.399 "seek_hole": false, 00:34:00.399 "seek_data": false, 00:34:00.399 "copy": false, 00:34:00.399 "nvme_iov_md": false 00:34:00.399 }, 00:34:00.399 "driver_specific": { 00:34:00.399 "nvme": [ 00:34:00.399 { 00:34:00.399 "pci_address": "0000:d8:00.0", 00:34:00.399 "trid": { 00:34:00.399 "trtype": "PCIe", 00:34:00.399 "traddr": "0000:d8:00.0" 00:34:00.399 }, 00:34:00.399 "ctrlr_data": { 00:34:00.399 "cntlid": 0, 00:34:00.399 "vendor_id": "0x8086", 00:34:00.399 "model_number": "INTEL SSDPE2KX020T8", 00:34:00.399 "serial_number": "BTLJ125505KA2P0BGN", 00:34:00.399 "firmware_revision": "VDV10170", 00:34:00.399 "oacs": { 00:34:00.399 "security": 0, 00:34:00.400 "format": 1, 00:34:00.400 "firmware": 1, 00:34:00.400 "ns_manage": 1 00:34:00.400 }, 00:34:00.400 "multi_ctrlr": false, 00:34:00.400 "ana_reporting": false 00:34:00.400 }, 00:34:00.400 "vs": { 00:34:00.400 "nvme_version": "1.2" 00:34:00.400 }, 00:34:00.400 "ns_data": { 00:34:00.400 "id": 1, 00:34:00.400 "can_share": false 00:34:00.400 } 00:34:00.400 } 00:34:00.400 ], 00:34:00.400 "mp_policy": "active_passive" 00:34:00.400 } 00:34:00.400 } 00:34:00.400 ] 00:34:00.657 10:44:13 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:00.657 10:44:13 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:01.667 5ba5f675-d946-41f6-b6a9-fd98ed641c72 00:34:01.667 10:44:14 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:01.925 bf690d8c-412c-4cdf-89ce-1074d38e40fb 00:34:01.925 10:44:14 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:01.925 10:44:14 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:34:01.925 10:44:14 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:01.925 10:44:14 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:01.925 10:44:14 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:01.925 10:44:14 compress_isal -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:01.925 10:44:14 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:02.183 10:44:14 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:02.442 [ 00:34:02.442 { 00:34:02.442 "name": "bf690d8c-412c-4cdf-89ce-1074d38e40fb", 00:34:02.442 "aliases": [ 00:34:02.442 "lvs0/lv0" 00:34:02.442 ], 00:34:02.442 "product_name": "Logical Volume", 00:34:02.442 "block_size": 512, 00:34:02.442 "num_blocks": 204800, 00:34:02.442 "uuid": "bf690d8c-412c-4cdf-89ce-1074d38e40fb", 00:34:02.442 "assigned_rate_limits": { 00:34:02.442 "rw_ios_per_sec": 0, 00:34:02.442 "rw_mbytes_per_sec": 0, 00:34:02.442 "r_mbytes_per_sec": 0, 00:34:02.442 "w_mbytes_per_sec": 0 00:34:02.442 }, 00:34:02.442 "claimed": false, 00:34:02.442 "zoned": false, 00:34:02.442 "supported_io_types": { 00:34:02.442 "read": true, 00:34:02.442 "write": true, 00:34:02.442 "unmap": true, 00:34:02.442 "flush": false, 00:34:02.442 "reset": true, 00:34:02.442 "nvme_admin": false, 00:34:02.442 "nvme_io": false, 00:34:02.442 "nvme_io_md": false, 00:34:02.442 "write_zeroes": true, 00:34:02.442 "zcopy": false, 00:34:02.442 "get_zone_info": false, 00:34:02.442 "zone_management": false, 00:34:02.442 "zone_append": false, 00:34:02.442 "compare": false, 00:34:02.442 "compare_and_write": false, 00:34:02.442 "abort": false, 00:34:02.442 "seek_hole": true, 00:34:02.442 "seek_data": true, 00:34:02.442 "copy": false, 00:34:02.442 "nvme_iov_md": false 00:34:02.442 }, 00:34:02.442 "driver_specific": { 00:34:02.442 "lvol": { 00:34:02.442 "lvol_store_uuid": "5ba5f675-d946-41f6-b6a9-fd98ed641c72", 00:34:02.442 "base_bdev": "Nvme0n1", 00:34:02.442 "thin_provision": true, 00:34:02.442 "num_allocated_clusters": 0, 00:34:02.442 "snapshot": false, 00:34:02.442 "clone": false, 00:34:02.442 "esnap_clone": false 00:34:02.442 } 00:34:02.442 } 00:34:02.442 } 00:34:02.442 ] 00:34:02.442 10:44:15 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:02.442 10:44:15 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:02.442 10:44:15 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:02.701 [2024-07-26 10:44:15.407764] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:02.701 COMP_lvs0/lv0 00:34:02.701 10:44:15 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:02.701 10:44:15 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:34:02.701 10:44:15 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:02.701 10:44:15 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:02.701 10:44:15 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:02.701 10:44:15 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:02.701 10:44:15 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:02.960 10:44:15 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:34:03.219 [ 00:34:03.219 { 00:34:03.219 "name": "COMP_lvs0/lv0", 
00:34:03.219 "aliases": [ 00:34:03.219 "decba8f6-f067-5b14-a264-35b52687786c" 00:34:03.219 ], 00:34:03.219 "product_name": "compress", 00:34:03.219 "block_size": 512, 00:34:03.219 "num_blocks": 200704, 00:34:03.219 "uuid": "decba8f6-f067-5b14-a264-35b52687786c", 00:34:03.219 "assigned_rate_limits": { 00:34:03.219 "rw_ios_per_sec": 0, 00:34:03.219 "rw_mbytes_per_sec": 0, 00:34:03.219 "r_mbytes_per_sec": 0, 00:34:03.219 "w_mbytes_per_sec": 0 00:34:03.219 }, 00:34:03.219 "claimed": false, 00:34:03.219 "zoned": false, 00:34:03.219 "supported_io_types": { 00:34:03.219 "read": true, 00:34:03.219 "write": true, 00:34:03.219 "unmap": false, 00:34:03.219 "flush": false, 00:34:03.219 "reset": false, 00:34:03.219 "nvme_admin": false, 00:34:03.219 "nvme_io": false, 00:34:03.219 "nvme_io_md": false, 00:34:03.219 "write_zeroes": true, 00:34:03.219 "zcopy": false, 00:34:03.219 "get_zone_info": false, 00:34:03.219 "zone_management": false, 00:34:03.219 "zone_append": false, 00:34:03.219 "compare": false, 00:34:03.219 "compare_and_write": false, 00:34:03.219 "abort": false, 00:34:03.219 "seek_hole": false, 00:34:03.219 "seek_data": false, 00:34:03.219 "copy": false, 00:34:03.219 "nvme_iov_md": false 00:34:03.219 }, 00:34:03.219 "driver_specific": { 00:34:03.219 "compress": { 00:34:03.219 "name": "COMP_lvs0/lv0", 00:34:03.219 "base_bdev_name": "bf690d8c-412c-4cdf-89ce-1074d38e40fb", 00:34:03.219 "pm_path": "/tmp/pmem/ed838d5c-bfb8-430d-a0fd-771ea53f0682" 00:34:03.219 } 00:34:03.219 } 00:34:03.219 } 00:34:03.219 ] 00:34:03.219 10:44:15 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:03.219 10:44:15 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:03.219 I/O targets: 00:34:03.219 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:34:03.219 00:34:03.219 00:34:03.219 CUnit - A unit testing framework for C - Version 2.1-3 00:34:03.219 http://cunit.sourceforge.net/ 00:34:03.219 00:34:03.219 00:34:03.219 Suite: bdevio tests on: COMP_lvs0/lv0 00:34:03.219 Test: blockdev write read block ...passed 00:34:03.219 Test: blockdev write zeroes read block ...passed 00:34:03.219 Test: blockdev write zeroes read no split ...passed 00:34:03.219 Test: blockdev write zeroes read split ...passed 00:34:03.219 Test: blockdev write zeroes read split partial ...passed 00:34:03.219 Test: blockdev reset ...[2024-07-26 10:44:16.036106] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:34:03.219 passed 00:34:03.219 Test: blockdev write read 8 blocks ...passed 00:34:03.219 Test: blockdev write read size > 128k ...passed 00:34:03.219 Test: blockdev write read invalid size ...passed 00:34:03.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:03.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:03.219 Test: blockdev write read max offset ...passed 00:34:03.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:03.219 Test: blockdev writev readv 8 blocks ...passed 00:34:03.219 Test: blockdev writev readv 30 x 1block ...passed 00:34:03.219 Test: blockdev writev readv block ...passed 00:34:03.219 Test: blockdev writev readv size > 128k ...passed 00:34:03.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:03.219 Test: blockdev comparev and writev ...passed 00:34:03.219 Test: blockdev nvme passthru rw ...passed 00:34:03.219 Test: blockdev nvme passthru vendor specific ...passed 00:34:03.219 
Test: blockdev nvme admin passthru ...passed 00:34:03.219 Test: blockdev copy ...passed 00:34:03.219 00:34:03.219 Run Summary: Type Total Ran Passed Failed Inactive 00:34:03.219 suites 1 1 n/a 0 0 00:34:03.219 tests 23 23 23 0 0 00:34:03.219 asserts 130 130 130 0 n/a 00:34:03.219 00:34:03.219 Elapsed time = 0.177 seconds 00:34:03.219 0 00:34:03.219 10:44:16 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:34:03.219 10:44:16 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:34:03.478 10:44:16 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:34:03.737 10:44:16 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:34:03.737 10:44:16 compress_isal -- compress/compress.sh@62 -- # killprocess 3574647 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3574647 ']' 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3574647 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@955 -- # uname 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3574647 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3574647' 00:34:03.737 killing process with pid 3574647 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@969 -- # kill 3574647 00:34:03.737 10:44:16 compress_isal -- common/autotest_common.sh@974 -- # wait 3574647 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3576408 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 00:34:06.270 10:44:19 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3576408 00:34:06.270 10:44:19 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3576408 ']' 00:34:06.270 10:44:19 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.270 10:44:19 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:06.270 10:44:19 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
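The CUnit summary above comes from the bdevio pass rather than bdevperf: bdevio is started in wait mode, the volumes are created over RPC as before, and tests.py runs the 23 block-device tests against COMP_lvs0/lv0. A minimal sketch under the same assumptions as the earlier ones (placeholder SPDK root):

    SPDK=/path/to/spdk                     # assumption: local SPDK root

    "$SPDK/test/bdev/bdevio/bdevio" -w &
    bdevio_pid=$!

    # ... create_vols as in the earlier sketches ...

    "$SPDK/test/bdev/bdevio/tests.py" perform_tests

    # teardown mirrors the bdevperf rounds: bdev_compress_delete, then
    # bdev_lvol_delete_lvstore, then stop the bdevio process
    kill "$bdevio_pid"; wait "$bdevio_pid" || true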
00:34:06.270 10:44:19 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:06.270 10:44:19 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:34:06.270 [2024-07-26 10:44:19.075888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:34:06.270 [2024-07-26 10:44:19.075953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576408 ] 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.4 cannot be used 
00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:06.270 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.270 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:06.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:06.271 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:06.530 [2024-07-26 10:44:19.201977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:06.530 [2024-07-26 10:44:19.247737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.530 [2024-07-26 10:44:19.247744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.098 10:44:19 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:07.098 10:44:19 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:34:07.098 10:44:19 compress_isal -- compress/compress.sh@74 -- # create_vols 00:34:07.098 10:44:19 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:07.098 10:44:19 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:10.382 10:44:23 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:10.382 10:44:23 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:34:10.382 10:44:23 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:10.382 10:44:23 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:10.382 10:44:23 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:10.382 10:44:23 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:10.382 10:44:23 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:10.640 10:44:23 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:10.640 [ 00:34:10.640 { 00:34:10.640 "name": "Nvme0n1", 00:34:10.640 "aliases": [ 00:34:10.640 "98a2a992-5b6e-4335-a893-c933e5ab7636" 00:34:10.640 ], 00:34:10.640 "product_name": "NVMe disk", 00:34:10.640 "block_size": 512, 00:34:10.640 "num_blocks": 3907029168, 
00:34:10.640 "uuid": "98a2a992-5b6e-4335-a893-c933e5ab7636", 00:34:10.640 "assigned_rate_limits": { 00:34:10.640 "rw_ios_per_sec": 0, 00:34:10.640 "rw_mbytes_per_sec": 0, 00:34:10.640 "r_mbytes_per_sec": 0, 00:34:10.640 "w_mbytes_per_sec": 0 00:34:10.640 }, 00:34:10.640 "claimed": false, 00:34:10.640 "zoned": false, 00:34:10.640 "supported_io_types": { 00:34:10.640 "read": true, 00:34:10.640 "write": true, 00:34:10.640 "unmap": true, 00:34:10.640 "flush": true, 00:34:10.640 "reset": true, 00:34:10.640 "nvme_admin": true, 00:34:10.640 "nvme_io": true, 00:34:10.640 "nvme_io_md": false, 00:34:10.640 "write_zeroes": true, 00:34:10.640 "zcopy": false, 00:34:10.640 "get_zone_info": false, 00:34:10.640 "zone_management": false, 00:34:10.640 "zone_append": false, 00:34:10.640 "compare": false, 00:34:10.640 "compare_and_write": false, 00:34:10.640 "abort": true, 00:34:10.640 "seek_hole": false, 00:34:10.640 "seek_data": false, 00:34:10.640 "copy": false, 00:34:10.640 "nvme_iov_md": false 00:34:10.640 }, 00:34:10.640 "driver_specific": { 00:34:10.640 "nvme": [ 00:34:10.641 { 00:34:10.641 "pci_address": "0000:d8:00.0", 00:34:10.641 "trid": { 00:34:10.641 "trtype": "PCIe", 00:34:10.641 "traddr": "0000:d8:00.0" 00:34:10.641 }, 00:34:10.641 "ctrlr_data": { 00:34:10.641 "cntlid": 0, 00:34:10.641 "vendor_id": "0x8086", 00:34:10.641 "model_number": "INTEL SSDPE2KX020T8", 00:34:10.641 "serial_number": "BTLJ125505KA2P0BGN", 00:34:10.641 "firmware_revision": "VDV10170", 00:34:10.641 "oacs": { 00:34:10.641 "security": 0, 00:34:10.641 "format": 1, 00:34:10.641 "firmware": 1, 00:34:10.641 "ns_manage": 1 00:34:10.641 }, 00:34:10.641 "multi_ctrlr": false, 00:34:10.641 "ana_reporting": false 00:34:10.641 }, 00:34:10.641 "vs": { 00:34:10.641 "nvme_version": "1.2" 00:34:10.641 }, 00:34:10.641 "ns_data": { 00:34:10.641 "id": 1, 00:34:10.641 "can_share": false 00:34:10.641 } 00:34:10.641 } 00:34:10.641 ], 00:34:10.641 "mp_policy": "active_passive" 00:34:10.641 } 00:34:10.641 } 00:34:10.641 ] 00:34:10.899 10:44:23 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:10.899 10:44:23 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:12.276 fd5cde73-5846-4494-8286-7b88f3c62f37 00:34:12.276 10:44:24 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:12.276 5ea52e95-35ec-4813-8e9d-1d94eff77eb8 00:34:12.276 10:44:25 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:12.276 10:44:25 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:34:12.276 10:44:25 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:12.276 10:44:25 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:12.276 10:44:25 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:12.276 10:44:25 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:12.276 10:44:25 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:12.535 10:44:25 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:12.793 [ 00:34:12.793 { 00:34:12.793 "name": "5ea52e95-35ec-4813-8e9d-1d94eff77eb8", 00:34:12.793 "aliases": [ 00:34:12.793 "lvs0/lv0" 
00:34:12.793 ], 00:34:12.793 "product_name": "Logical Volume", 00:34:12.793 "block_size": 512, 00:34:12.793 "num_blocks": 204800, 00:34:12.793 "uuid": "5ea52e95-35ec-4813-8e9d-1d94eff77eb8", 00:34:12.793 "assigned_rate_limits": { 00:34:12.793 "rw_ios_per_sec": 0, 00:34:12.793 "rw_mbytes_per_sec": 0, 00:34:12.793 "r_mbytes_per_sec": 0, 00:34:12.793 "w_mbytes_per_sec": 0 00:34:12.793 }, 00:34:12.793 "claimed": false, 00:34:12.793 "zoned": false, 00:34:12.793 "supported_io_types": { 00:34:12.793 "read": true, 00:34:12.793 "write": true, 00:34:12.793 "unmap": true, 00:34:12.793 "flush": false, 00:34:12.793 "reset": true, 00:34:12.793 "nvme_admin": false, 00:34:12.793 "nvme_io": false, 00:34:12.793 "nvme_io_md": false, 00:34:12.793 "write_zeroes": true, 00:34:12.793 "zcopy": false, 00:34:12.793 "get_zone_info": false, 00:34:12.793 "zone_management": false, 00:34:12.793 "zone_append": false, 00:34:12.793 "compare": false, 00:34:12.793 "compare_and_write": false, 00:34:12.793 "abort": false, 00:34:12.793 "seek_hole": true, 00:34:12.793 "seek_data": true, 00:34:12.793 "copy": false, 00:34:12.793 "nvme_iov_md": false 00:34:12.793 }, 00:34:12.793 "driver_specific": { 00:34:12.793 "lvol": { 00:34:12.793 "lvol_store_uuid": "fd5cde73-5846-4494-8286-7b88f3c62f37", 00:34:12.793 "base_bdev": "Nvme0n1", 00:34:12.793 "thin_provision": true, 00:34:12.793 "num_allocated_clusters": 0, 00:34:12.793 "snapshot": false, 00:34:12.793 "clone": false, 00:34:12.793 "esnap_clone": false 00:34:12.793 } 00:34:12.793 } 00:34:12.793 } 00:34:12.793 ] 00:34:12.793 10:44:25 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:12.793 10:44:25 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:12.793 10:44:25 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:12.793 [2024-07-26 10:44:25.685823] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:12.793 COMP_lvs0/lv0 00:34:13.052 10:44:25 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:13.052 10:44:25 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:34:13.052 10:44:25 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:13.052 10:44:25 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:13.052 10:44:25 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:13.053 10:44:25 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:13.053 10:44:25 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:13.053 10:44:25 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:34:13.311 [ 00:34:13.311 { 00:34:13.311 "name": "COMP_lvs0/lv0", 00:34:13.311 "aliases": [ 00:34:13.311 "681ef56e-e349-55c1-8dc2-a9dcfb4d6c30" 00:34:13.311 ], 00:34:13.311 "product_name": "compress", 00:34:13.311 "block_size": 512, 00:34:13.311 "num_blocks": 200704, 00:34:13.311 "uuid": "681ef56e-e349-55c1-8dc2-a9dcfb4d6c30", 00:34:13.311 "assigned_rate_limits": { 00:34:13.311 "rw_ios_per_sec": 0, 00:34:13.311 "rw_mbytes_per_sec": 0, 00:34:13.311 "r_mbytes_per_sec": 0, 00:34:13.311 "w_mbytes_per_sec": 0 00:34:13.311 }, 00:34:13.311 "claimed": false, 00:34:13.311 "zoned": false, 00:34:13.311 
"supported_io_types": { 00:34:13.311 "read": true, 00:34:13.311 "write": true, 00:34:13.311 "unmap": false, 00:34:13.311 "flush": false, 00:34:13.311 "reset": false, 00:34:13.311 "nvme_admin": false, 00:34:13.311 "nvme_io": false, 00:34:13.311 "nvme_io_md": false, 00:34:13.311 "write_zeroes": true, 00:34:13.311 "zcopy": false, 00:34:13.311 "get_zone_info": false, 00:34:13.311 "zone_management": false, 00:34:13.311 "zone_append": false, 00:34:13.311 "compare": false, 00:34:13.311 "compare_and_write": false, 00:34:13.311 "abort": false, 00:34:13.311 "seek_hole": false, 00:34:13.311 "seek_data": false, 00:34:13.311 "copy": false, 00:34:13.311 "nvme_iov_md": false 00:34:13.311 }, 00:34:13.311 "driver_specific": { 00:34:13.311 "compress": { 00:34:13.311 "name": "COMP_lvs0/lv0", 00:34:13.311 "base_bdev_name": "5ea52e95-35ec-4813-8e9d-1d94eff77eb8", 00:34:13.311 "pm_path": "/tmp/pmem/243cfc77-36a9-4211-91cc-595d27ad91ae" 00:34:13.311 } 00:34:13.311 } 00:34:13.311 } 00:34:13.311 ] 00:34:13.311 10:44:26 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:13.311 10:44:26 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:13.570 Running I/O for 30 seconds... 00:34:45.688 00:34:45.688 Latency(us) 00:34:45.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.688 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:34:45.688 Verification LBA range: start 0x0 length 0xc40 00:34:45.688 COMP_lvs0/lv0 : 30.01 1678.23 26.22 0.00 0.00 37936.63 203.98 44459.62 00:34:45.688 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:34:45.688 Verification LBA range: start 0xc40 length 0xc40 00:34:45.688 COMP_lvs0/lv0 : 30.01 4969.10 77.64 0.00 0.00 12779.26 465.31 19608.37 00:34:45.688 =================================================================================================================== 00:34:45.688 Total : 6647.33 103.86 0.00 0.00 19131.03 203.98 44459.62 00:34:45.688 0 00:34:45.688 10:44:56 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:34:45.688 10:44:56 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:34:45.688 10:44:56 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:34:45.688 10:44:56 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:34:45.688 10:44:56 compress_isal -- compress/compress.sh@78 -- # killprocess 3576408 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3576408 ']' 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3576408 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@955 -- # uname 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3576408 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:45.688 10:44:56 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3576408' 00:34:45.688 killing process with pid 3576408 00:34:45.689 10:44:56 compress_isal -- 
common/autotest_common.sh@969 -- # kill 3576408 00:34:45.689 Received shutdown signal, test time was about 30.000000 seconds 00:34:45.689 00:34:45.689 Latency(us) 00:34:45.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.689 =================================================================================================================== 00:34:45.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:45.689 10:44:56 compress_isal -- common/autotest_common.sh@974 -- # wait 3576408 00:34:46.626 10:44:59 compress_isal -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:34:46.626 10:44:59 compress_isal -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:34:46.626 10:44:59 compress_isal -- compress/compress.sh@96 -- # NET_TYPE=virt 00:34:46.626 10:44:59 compress_isal -- compress/compress.sh@96 -- # nvmftestinit 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.626 10:44:59 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.626 10:44:59 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:46.626 Cannot find device "nvmf_tgt_br" 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@155 -- # true 
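Because NET_TYPE=virt, the NVMe-oF/TCP pass runs over a purely virtual network. nvmf_veth_init first tries to tear down any leftovers from a previous run (hence the "Cannot find device" and "Cannot open network namespace" messages here and just below) and then rebuilds the topology with the ip/iptables commands that follow. Condensed into a sketch, with the interface names taken from the log and the second target interface (nvmf_tgt_if2 / 10.0.0.3) plus the link-up commands omitted:

  # Target-side namespace plus two veth pairs joined by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Let the initiator reach the target's NVMe/TCP port.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The initiator side (10.0.0.1) stays in the root namespace while the target listens on 10.0.0.2 inside nvmf_tgt_ns_spdk, as the ping checks further down confirm.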
00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:46.626 Cannot find device "nvmf_tgt_br2" 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@156 -- # true 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:46.626 Cannot find device "nvmf_tgt_br" 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@158 -- # true 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:46.626 Cannot find device "nvmf_tgt_br2" 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@159 -- # true 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:46.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@162 -- # true 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:46.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@163 -- # true 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:46.626 10:44:59 compress_isal -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:46.627 10:44:59 compress_isal -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:46.886 10:44:59 compress_isal -- 
nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:46.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:46.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:34:46.886 00:34:46.886 --- 10.0.0.2 ping statistics --- 00:34:46.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.886 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:46.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:46.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:34:46.886 00:34:46.886 --- 10.0.0.3 ping statistics --- 00:34:46.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.886 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:46.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:46.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:34:46.886 00:34:46.886 --- 10.0.0.1 ping statistics --- 00:34:46.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.886 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@433 -- # return 0 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:46.886 10:44:59 compress_isal -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:47.146 10:44:59 compress_isal -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:34:47.146 10:44:59 compress_isal -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:34:47.146 10:44:59 compress_isal -- nvmf/common.sh@481 -- # nvmfpid=3583324 00:34:47.146 10:44:59 compress_isal -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:34:47.146 10:44:59 compress_isal -- nvmf/common.sh@482 -- # waitforlisten 3583324 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3583324 ']' 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.146 10:44:59 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:34:47.146 [2024-07-26 10:44:59.867502] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:34:47.146 [2024-07-26 10:44:59.867559] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:47.146 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:47.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.146 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:47.146 [2024-07-26 10:45:00.012115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:47.405 [2024-07-26 10:45:00.057692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.405 [2024-07-26 10:45:00.057734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.405 [2024-07-26 10:45:00.057748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.405 [2024-07-26 10:45:00.057760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.405 [2024-07-26 10:45:00.057770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
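With the namespace in place, nvmfappstart -m 0x7 starts the NVMe-oF target inside it, which is why three reactors (cores 0-2) come up in the next entries, and once the target answers on /var/tmp/spdk.sock the script creates the TCP transport. A sketch of just those two steps, again using $SPDK as a placeholder for the workspace path shown in the log:

  # Target runs inside the test namespace; -m 0x7 gives it three cores.
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  # Once the RPC socket is up, create the TCP transport with the same flags as the log:
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -u 8192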
00:34:47.405 [2024-07-26 10:45:00.057832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.406 [2024-07-26 10:45:00.057926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:47.406 [2024-07-26 10:45:00.057929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.973 10:45:00 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:47.973 10:45:00 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:34:47.973 10:45:00 compress_isal -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:47.973 10:45:00 compress_isal -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:47.973 10:45:00 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:34:47.973 10:45:00 compress_isal -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.973 10:45:00 compress_isal -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:47.973 10:45:00 compress_isal -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:34:48.232 [2024-07-26 10:45:00.960633] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.232 10:45:00 compress_isal -- compress/compress.sh@102 -- # create_vols 00:34:48.232 10:45:00 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:48.232 10:45:00 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:51.520 10:45:04 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:51.520 10:45:04 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:51.778 [ 00:34:51.778 { 00:34:51.778 "name": "Nvme0n1", 00:34:51.778 "aliases": [ 00:34:51.778 "15ed23de-1bbd-4c93-855c-d838c72ab775" 00:34:51.778 ], 00:34:51.778 "product_name": "NVMe disk", 00:34:51.778 "block_size": 512, 00:34:51.778 "num_blocks": 3907029168, 00:34:51.778 "uuid": "15ed23de-1bbd-4c93-855c-d838c72ab775", 00:34:51.778 "assigned_rate_limits": { 00:34:51.778 "rw_ios_per_sec": 0, 00:34:51.778 "rw_mbytes_per_sec": 0, 00:34:51.778 "r_mbytes_per_sec": 0, 00:34:51.778 "w_mbytes_per_sec": 0 00:34:51.778 }, 00:34:51.778 "claimed": false, 00:34:51.778 "zoned": false, 00:34:51.778 "supported_io_types": { 00:34:51.778 "read": true, 00:34:51.778 "write": true, 00:34:51.778 "unmap": true, 00:34:51.778 "flush": true, 00:34:51.778 "reset": true, 00:34:51.778 "nvme_admin": true, 00:34:51.778 "nvme_io": true, 00:34:51.778 "nvme_io_md": false, 00:34:51.778 "write_zeroes": true, 00:34:51.778 "zcopy": false, 00:34:51.778 "get_zone_info": false, 00:34:51.778 "zone_management": false, 00:34:51.778 
"zone_append": false, 00:34:51.778 "compare": false, 00:34:51.778 "compare_and_write": false, 00:34:51.778 "abort": true, 00:34:51.778 "seek_hole": false, 00:34:51.778 "seek_data": false, 00:34:51.778 "copy": false, 00:34:51.778 "nvme_iov_md": false 00:34:51.778 }, 00:34:51.778 "driver_specific": { 00:34:51.778 "nvme": [ 00:34:51.778 { 00:34:51.778 "pci_address": "0000:d8:00.0", 00:34:51.778 "trid": { 00:34:51.778 "trtype": "PCIe", 00:34:51.778 "traddr": "0000:d8:00.0" 00:34:51.779 }, 00:34:51.779 "ctrlr_data": { 00:34:51.779 "cntlid": 0, 00:34:51.779 "vendor_id": "0x8086", 00:34:51.779 "model_number": "INTEL SSDPE2KX020T8", 00:34:51.779 "serial_number": "BTLJ125505KA2P0BGN", 00:34:51.779 "firmware_revision": "VDV10170", 00:34:51.779 "oacs": { 00:34:51.779 "security": 0, 00:34:51.779 "format": 1, 00:34:51.779 "firmware": 1, 00:34:51.779 "ns_manage": 1 00:34:51.779 }, 00:34:51.779 "multi_ctrlr": false, 00:34:51.779 "ana_reporting": false 00:34:51.779 }, 00:34:51.779 "vs": { 00:34:51.779 "nvme_version": "1.2" 00:34:51.779 }, 00:34:51.779 "ns_data": { 00:34:51.779 "id": 1, 00:34:51.779 "can_share": false 00:34:51.779 } 00:34:51.779 } 00:34:51.779 ], 00:34:51.779 "mp_policy": "active_passive" 00:34:51.779 } 00:34:51.779 } 00:34:51.779 ] 00:34:51.779 10:45:04 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:51.779 10:45:04 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:53.156 b000c80e-b307-4e27-a8c6-8a70e802e5ce 00:34:53.156 10:45:05 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:53.156 3b3173eb-7bed-430b-ac8e-ea02eeddcac4 00:34:53.156 10:45:05 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:53.156 10:45:05 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:34:53.156 10:45:05 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:53.156 10:45:05 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:53.156 10:45:05 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:53.156 10:45:05 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:53.156 10:45:05 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:53.415 10:45:06 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:53.674 [ 00:34:53.674 { 00:34:53.674 "name": "3b3173eb-7bed-430b-ac8e-ea02eeddcac4", 00:34:53.674 "aliases": [ 00:34:53.674 "lvs0/lv0" 00:34:53.674 ], 00:34:53.674 "product_name": "Logical Volume", 00:34:53.674 "block_size": 512, 00:34:53.674 "num_blocks": 204800, 00:34:53.674 "uuid": "3b3173eb-7bed-430b-ac8e-ea02eeddcac4", 00:34:53.674 "assigned_rate_limits": { 00:34:53.674 "rw_ios_per_sec": 0, 00:34:53.674 "rw_mbytes_per_sec": 0, 00:34:53.674 "r_mbytes_per_sec": 0, 00:34:53.674 "w_mbytes_per_sec": 0 00:34:53.674 }, 00:34:53.674 "claimed": false, 00:34:53.674 "zoned": false, 00:34:53.674 "supported_io_types": { 00:34:53.674 "read": true, 00:34:53.674 "write": true, 00:34:53.674 "unmap": true, 00:34:53.674 "flush": false, 00:34:53.674 "reset": true, 00:34:53.674 "nvme_admin": false, 00:34:53.674 "nvme_io": false, 00:34:53.674 "nvme_io_md": false, 00:34:53.674 
"write_zeroes": true, 00:34:53.674 "zcopy": false, 00:34:53.674 "get_zone_info": false, 00:34:53.674 "zone_management": false, 00:34:53.674 "zone_append": false, 00:34:53.674 "compare": false, 00:34:53.674 "compare_and_write": false, 00:34:53.674 "abort": false, 00:34:53.674 "seek_hole": true, 00:34:53.674 "seek_data": true, 00:34:53.674 "copy": false, 00:34:53.674 "nvme_iov_md": false 00:34:53.674 }, 00:34:53.674 "driver_specific": { 00:34:53.674 "lvol": { 00:34:53.674 "lvol_store_uuid": "b000c80e-b307-4e27-a8c6-8a70e802e5ce", 00:34:53.674 "base_bdev": "Nvme0n1", 00:34:53.674 "thin_provision": true, 00:34:53.674 "num_allocated_clusters": 0, 00:34:53.674 "snapshot": false, 00:34:53.674 "clone": false, 00:34:53.674 "esnap_clone": false 00:34:53.675 } 00:34:53.675 } 00:34:53.675 } 00:34:53.675 ] 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:53.675 10:45:06 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:53.675 10:45:06 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:53.675 [2024-07-26 10:45:06.496199] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:53.675 COMP_lvs0/lv0 00:34:53.675 10:45:06 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@901 -- # local i 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:53.675 10:45:06 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:53.934 10:45:06 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:34:53.934 [ 00:34:53.934 { 00:34:53.934 "name": "COMP_lvs0/lv0", 00:34:53.934 "aliases": [ 00:34:53.934 "ed6e2f10-4f53-58df-bf8b-9ff706e68e85" 00:34:53.934 ], 00:34:53.934 "product_name": "compress", 00:34:53.934 "block_size": 512, 00:34:53.934 "num_blocks": 200704, 00:34:53.934 "uuid": "ed6e2f10-4f53-58df-bf8b-9ff706e68e85", 00:34:53.934 "assigned_rate_limits": { 00:34:53.934 "rw_ios_per_sec": 0, 00:34:53.934 "rw_mbytes_per_sec": 0, 00:34:53.934 "r_mbytes_per_sec": 0, 00:34:53.934 "w_mbytes_per_sec": 0 00:34:53.934 }, 00:34:53.934 "claimed": false, 00:34:53.934 "zoned": false, 00:34:53.934 "supported_io_types": { 00:34:53.934 "read": true, 00:34:53.934 "write": true, 00:34:53.934 "unmap": false, 00:34:53.934 "flush": false, 00:34:53.934 "reset": false, 00:34:53.934 "nvme_admin": false, 00:34:53.934 "nvme_io": false, 00:34:53.934 "nvme_io_md": false, 00:34:53.934 "write_zeroes": true, 00:34:53.934 "zcopy": false, 00:34:53.934 "get_zone_info": false, 00:34:53.934 "zone_management": false, 00:34:53.934 "zone_append": false, 00:34:53.934 "compare": false, 00:34:53.934 "compare_and_write": false, 00:34:53.934 "abort": false, 00:34:53.934 "seek_hole": false, 00:34:53.934 "seek_data": false, 00:34:53.934 "copy": false, 00:34:53.934 "nvme_iov_md": false 00:34:53.934 }, 00:34:53.934 "driver_specific": { 00:34:53.934 
"compress": { 00:34:53.934 "name": "COMP_lvs0/lv0", 00:34:53.934 "base_bdev_name": "3b3173eb-7bed-430b-ac8e-ea02eeddcac4", 00:34:53.934 "pm_path": "/tmp/pmem/ab945d3e-43e9-4a50-8d8c-45f9a449b921" 00:34:53.934 } 00:34:53.934 } 00:34:53.934 } 00:34:53.934 ] 00:34:54.193 10:45:06 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:34:54.193 10:45:06 compress_isal -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:54.193 10:45:06 compress_isal -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:34:54.452 10:45:07 compress_isal -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:54.710 [2024-07-26 10:45:07.376438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.710 10:45:07 compress_isal -- compress/compress.sh@109 -- # perf_pid=3585142 00:34:54.710 10:45:07 compress_isal -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:34:54.710 10:45:07 compress_isal -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:54.710 10:45:07 compress_isal -- compress/compress.sh@113 -- # wait 3585142 00:34:54.969 [2024-07-26 10:45:07.630126] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:35:27.119 Initializing NVMe Controllers 00:35:27.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:27.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:27.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:27.119 Initialization complete. Launching workers. 
00:35:27.119 ======================================================== 00:35:27.119 Latency(us) 00:35:27.119 Device Information : IOPS MiB/s Average min max 00:35:27.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 5019.77 19.61 12751.92 1102.66 36161.73 00:35:27.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 3152.37 12.31 20306.24 2115.23 41793.12 00:35:27.119 ======================================================== 00:35:27.119 Total : 8172.13 31.92 15665.97 1102.66 41793.12 00:35:27.119 00:35:27.119 10:45:37 compress_isal -- compress/compress.sh@114 -- # destroy_vols 00:35:27.119 10:45:37 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:27.119 10:45:37 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:27.119 10:45:38 compress_isal -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:35:27.119 10:45:38 compress_isal -- compress/compress.sh@117 -- # nvmftestfini 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@117 -- # sync 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@120 -- # set +e 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:27.119 rmmod nvme_tcp 00:35:27.119 rmmod nvme_fabrics 00:35:27.119 rmmod nvme_keyring 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@124 -- # set -e 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@125 -- # return 0 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@489 -- # '[' -n 3583324 ']' 00:35:27.119 10:45:38 compress_isal -- nvmf/common.sh@490 -- # killprocess 3583324 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3583324 ']' 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3583324 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@955 -- # uname 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3583324 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3583324' 00:35:27.119 killing process with pid 3583324 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@969 -- # kill 3583324 00:35:27.119 10:45:38 compress_isal -- common/autotest_common.sh@974 -- # wait 3583324 00:35:27.687 10:45:40 compress_isal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:27.687 10:45:40 compress_isal -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:27.687 10:45:40 compress_isal -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:27.687 10:45:40 compress_isal -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:27.687 10:45:40 compress_isal -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:27.687 10:45:40 
compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.687 10:45:40 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:27.687 10:45:40 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:27.687 10:45:40 compress_isal -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:27.687 10:45:40 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:35:27.687 00:35:27.687 real 2m11.279s 00:35:27.687 user 6m1.558s 00:35:27.687 sys 0m20.175s 00:35:27.687 10:45:40 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:27.687 10:45:40 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:35:27.687 ************************************ 00:35:27.687 END TEST compress_isal 00:35:27.687 ************************************ 00:35:27.947 10:45:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:35:27.947 10:45:40 -- spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:35:27.947 10:45:40 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:35:27.947 10:45:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:27.947 10:45:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:27.947 10:45:40 -- common/autotest_common.sh@10 -- # set +x 00:35:27.947 ************************************ 00:35:27.947 START TEST blockdev_crypto_aesni 00:35:27.947 ************************************ 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:35:27.947 * Looking for test storage... 00:35:27.947 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:35:27.947 10:45:40 
blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3590534 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:35:27.947 10:45:40 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 3590534 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 3590534 ']' 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.947 10:45:40 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:28.207 [2024-07-26 10:45:40.871124] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:35:28.207 [2024-07-26 10:45:40.871188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590534 ] 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:28.207 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:28.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:28.207 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:28.207 [2024-07-26 10:45:41.007653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.207 [2024-07-26 10:45:41.051442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.145 10:45:41 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:29.145 10:45:41 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 00:35:29.145 10:45:41 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:35:29.145 10:45:41 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:35:29.145 10:45:41 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:35:29.145 10:45:41 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.145 10:45:41 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:29.145 [2024-07-26 10:45:41.781679] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:35:29.145 [2024-07-26 10:45:41.789710] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:29.145 [2024-07-26 10:45:41.797725] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:29.145 [2024-07-26 10:45:41.868100] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:35:31.681 true 00:35:31.681 true 00:35:31.681 true 00:35:31.681 true 00:35:31.681 Malloc0 00:35:31.681 Malloc1 00:35:31.681 Malloc2 00:35:31.681 Malloc3 00:35:31.681 [2024-07-26 10:45:44.342555] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:35:31.681 crypto_ram 00:35:31.681 [2024-07-26 10:45:44.350572] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:35:31.681 crypto_ram2 00:35:31.681 [2024-07-26 10:45:44.358594] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:35:31.681 crypto_ram3 00:35:31.681 [2024-07-26 10:45:44.366616] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:35:31.681 crypto_ram4 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:35:31.681 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.681 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:35:31.681 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:31.681 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.681 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:31.682 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:35:31.682 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "2ca0b693-7caf-534b-881a-65f592a724ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2ca0b693-7caf-534b-881a-65f592a724ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": 
"crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "766c74a1-4cca-5faa-91ee-6945bd563362"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "766c74a1-4cca-5faa-91ee-6945bd563362",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5ac9adc1-80d8-50f7-9e97-36139d84746e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5ac9adc1-80d8-50f7-9e97-36139d84746e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ae73b0e2-5550-5edc-ab0f-8abb3bf7d3f7"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ae73b0e2-5550-5edc-ab0f-8abb3bf7d3f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:35:31.942 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # 
bdev_list=("${bdevs_name[@]}") 00:35:31.942 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:35:31.942 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:35:31.942 10:45:44 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 3590534 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 3590534 ']' 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 3590534 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3590534 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3590534' 00:35:31.942 killing process with pid 3590534 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 3590534 00:35:31.942 10:45:44 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 3590534 00:35:32.201 10:45:45 blockdev_crypto_aesni -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:32.201 10:45:45 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:35:32.201 10:45:45 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:35:32.201 10:45:45 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.201 10:45:45 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:32.461 ************************************ 00:35:32.461 START TEST bdev_hello_world 00:35:32.461 ************************************ 00:35:32.461 10:45:45 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:35:32.461 [2024-07-26 10:45:45.195238] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:35:32.461 [2024-07-26 10:45:45.195293] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591313 ] 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.6 cannot be used 
00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.461 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.462 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:32.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.462 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:32.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.462 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:32.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.462 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:32.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:32.462 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:32.462 [2024-07-26 10:45:45.329000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.721 [2024-07-26 10:45:45.372687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.721 [2024-07-26 10:45:45.394018] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:35:32.721 [2024-07-26 10:45:45.402045] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:32.721 [2024-07-26 10:45:45.410064] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:32.721 [2024-07-26 10:45:45.524837] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:35:35.258 [2024-07-26 10:45:47.855509] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:35:35.258 [2024-07-26 10:45:47.855578] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:35.258 [2024-07-26 10:45:47.855593] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:35.258 [2024-07-26 10:45:47.863527] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:35:35.258 [2024-07-26 10:45:47.863547] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:35.258 [2024-07-26 10:45:47.863558] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:35.258 [2024-07-26 10:45:47.871547] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:35:35.258 [2024-07-26 10:45:47.871565] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:35:35.258 [2024-07-26 10:45:47.871576] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:35.258 [2024-07-26 10:45:47.879566] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:35:35.258 [2024-07-26 10:45:47.879584] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:35:35.258 [2024-07-26 
10:45:47.879594] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:35.258 [2024-07-26 10:45:47.950826] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:35.258 [2024-07-26 10:45:47.950866] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:35:35.258 [2024-07-26 10:45:47.950883] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:35.258 [2024-07-26 10:45:47.952049] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:35.258 [2024-07-26 10:45:47.952126] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:35.258 [2024-07-26 10:45:47.952151] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:35.258 [2024-07-26 10:45:47.952193] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:35:35.258 00:35:35.258 [2024-07-26 10:45:47.952211] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:35.518 00:35:35.518 real 0m3.108s 00:35:35.518 user 0m2.582s 00:35:35.518 sys 0m0.478s 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:35.518 ************************************ 00:35:35.518 END TEST bdev_hello_world 00:35:35.518 ************************************ 00:35:35.518 10:45:48 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:35:35.518 10:45:48 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:35.518 10:45:48 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:35.518 10:45:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:35.518 ************************************ 00:35:35.518 START TEST bdev_bounds 00:35:35.518 ************************************ 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3591833 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3591833' 00:35:35.518 Process bdevio pid: 3591833 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3591833 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3591833 ']' 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.518 10:45:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:35.518 [2024-07-26 10:45:48.377838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:35:35.518 [2024-07-26 10:45:48.377895] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591833 ] 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:35.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:35.778 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:35.778 [2024-07-26 10:45:48.511011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.778 [2024-07-26 10:45:48.558419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.778 [2024-07-26 10:45:48.558519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.779 [2024-07-26 10:45:48.558523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.779 [2024-07-26 10:45:48.579789] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:35:35.779 [2024-07-26 10:45:48.587815] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:35.779 [2024-07-26 10:45:48.595836] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:36.038 [2024-07-26 10:45:48.692014] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:35:38.573 [2024-07-26 10:45:51.017573] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:35:38.573 [2024-07-26 10:45:51.017653] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:38.573 [2024-07-26 10:45:51.017667] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.573 [2024-07-26 10:45:51.025604] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:35:38.573 [2024-07-26 10:45:51.025623] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:38.573 [2024-07-26 10:45:51.025635] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.573 [2024-07-26 10:45:51.033616] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:35:38.573 
[2024-07-26 10:45:51.033633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:35:38.573 [2024-07-26 10:45:51.033644] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.573 [2024-07-26 10:45:51.041639] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:35:38.573 [2024-07-26 10:45:51.041660] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:35:38.573 [2024-07-26 10:45:51.041670] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:38.573 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:38.573 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:35:38.573 10:45:51 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:38.573 I/O targets: 00:35:38.573 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:35:38.573 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:35:38.573 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:35:38.573 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:35:38.573 00:35:38.573 00:35:38.573 CUnit - A unit testing framework for C - Version 2.1-3 00:35:38.573 http://cunit.sourceforge.net/ 00:35:38.573 00:35:38.573 00:35:38.573 Suite: bdevio tests on: crypto_ram4 00:35:38.573 Test: blockdev write read block ...passed 00:35:38.573 Test: blockdev write zeroes read block ...passed 00:35:38.573 Test: blockdev write zeroes read no split ...passed 00:35:38.573 Test: blockdev write zeroes read split ...passed 00:35:38.573 Test: blockdev write zeroes read split partial ...passed 00:35:38.573 Test: blockdev reset ...passed 00:35:38.573 Test: blockdev write read 8 blocks ...passed 00:35:38.573 Test: blockdev write read size > 128k ...passed 00:35:38.573 Test: blockdev write read invalid size ...passed 00:35:38.573 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:38.573 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:38.573 Test: blockdev write read max offset ...passed 00:35:38.573 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:38.573 Test: blockdev writev readv 8 blocks ...passed 00:35:38.573 Test: blockdev writev readv 30 x 1block ...passed 00:35:38.573 Test: blockdev writev readv block ...passed 00:35:38.573 Test: blockdev writev readv size > 128k ...passed 00:35:38.573 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:38.573 Test: blockdev comparev and writev ...passed 00:35:38.573 Test: blockdev nvme passthru rw ...passed 00:35:38.573 Test: blockdev nvme passthru vendor specific ...passed 00:35:38.573 Test: blockdev nvme admin passthru ...passed 00:35:38.573 Test: blockdev copy ...passed 00:35:38.573 Suite: bdevio tests on: crypto_ram3 00:35:38.573 Test: blockdev write read block ...passed 00:35:38.573 Test: blockdev write zeroes read block ...passed 00:35:38.573 Test: blockdev write zeroes read no split ...passed 00:35:38.573 Test: blockdev write zeroes read split ...passed 00:35:38.573 Test: blockdev write zeroes read split partial ...passed 00:35:38.573 Test: blockdev reset ...passed 00:35:38.573 Test: blockdev write read 8 blocks ...passed 00:35:38.573 Test: blockdev write read size > 128k ...passed 00:35:38.573 Test: blockdev write read invalid 
size ...passed 00:35:38.573 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:38.573 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:38.573 Test: blockdev write read max offset ...passed 00:35:38.573 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:38.573 Test: blockdev writev readv 8 blocks ...passed 00:35:38.573 Test: blockdev writev readv 30 x 1block ...passed 00:35:38.573 Test: blockdev writev readv block ...passed 00:35:38.573 Test: blockdev writev readv size > 128k ...passed 00:35:38.573 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:38.573 Test: blockdev comparev and writev ...passed 00:35:38.573 Test: blockdev nvme passthru rw ...passed 00:35:38.573 Test: blockdev nvme passthru vendor specific ...passed 00:35:38.573 Test: blockdev nvme admin passthru ...passed 00:35:38.573 Test: blockdev copy ...passed 00:35:38.573 Suite: bdevio tests on: crypto_ram2 00:35:38.573 Test: blockdev write read block ...passed 00:35:38.573 Test: blockdev write zeroes read block ...passed 00:35:38.573 Test: blockdev write zeroes read no split ...passed 00:35:38.573 Test: blockdev write zeroes read split ...passed 00:35:38.573 Test: blockdev write zeroes read split partial ...passed 00:35:38.573 Test: blockdev reset ...passed 00:35:38.573 Test: blockdev write read 8 blocks ...passed 00:35:38.573 Test: blockdev write read size > 128k ...passed 00:35:38.573 Test: blockdev write read invalid size ...passed 00:35:38.573 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:38.573 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:38.573 Test: blockdev write read max offset ...passed 00:35:38.573 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:38.573 Test: blockdev writev readv 8 blocks ...passed 00:35:38.573 Test: blockdev writev readv 30 x 1block ...passed 00:35:38.573 Test: blockdev writev readv block ...passed 00:35:38.573 Test: blockdev writev readv size > 128k ...passed 00:35:38.573 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:38.574 Test: blockdev comparev and writev ...passed 00:35:38.574 Test: blockdev nvme passthru rw ...passed 00:35:38.574 Test: blockdev nvme passthru vendor specific ...passed 00:35:38.574 Test: blockdev nvme admin passthru ...passed 00:35:38.574 Test: blockdev copy ...passed 00:35:38.574 Suite: bdevio tests on: crypto_ram 00:35:38.574 Test: blockdev write read block ...passed 00:35:38.574 Test: blockdev write zeroes read block ...passed 00:35:38.574 Test: blockdev write zeroes read no split ...passed 00:35:38.574 Test: blockdev write zeroes read split ...passed 00:35:38.833 Test: blockdev write zeroes read split partial ...passed 00:35:38.833 Test: blockdev reset ...passed 00:35:38.833 Test: blockdev write read 8 blocks ...passed 00:35:38.833 Test: blockdev write read size > 128k ...passed 00:35:38.833 Test: blockdev write read invalid size ...passed 00:35:38.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:38.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:38.833 Test: blockdev write read max offset ...passed 00:35:38.833 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:38.833 Test: blockdev writev readv 8 blocks ...passed 00:35:38.833 Test: blockdev writev readv 30 x 1block ...passed 00:35:38.833 Test: blockdev writev readv block ...passed 00:35:38.833 Test: blockdev writev 
readv size > 128k ...passed 00:35:38.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:38.833 Test: blockdev comparev and writev ...passed 00:35:38.833 Test: blockdev nvme passthru rw ...passed 00:35:38.833 Test: blockdev nvme passthru vendor specific ...passed 00:35:38.833 Test: blockdev nvme admin passthru ...passed 00:35:38.833 Test: blockdev copy ...passed 00:35:38.833 00:35:38.833 Run Summary: Type Total Ran Passed Failed Inactive 00:35:38.833 suites 4 4 n/a 0 0 00:35:38.833 tests 92 92 92 0 0 00:35:38.833 asserts 520 520 520 0 n/a 00:35:38.833 00:35:38.833 Elapsed time = 0.491 seconds 00:35:38.833 0 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3591833 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3591833 ']' 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3591833 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3591833 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3591833' 00:35:38.833 killing process with pid 3591833 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3591833 00:35:38.833 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3591833 00:35:39.092 10:45:51 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:35:39.092 00:35:39.092 real 0m3.548s 00:35:39.092 user 0m9.910s 00:35:39.092 sys 0m0.690s 00:35:39.092 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:39.092 10:45:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:39.092 ************************************ 00:35:39.092 END TEST bdev_bounds 00:35:39.093 ************************************ 00:35:39.093 10:45:51 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:35:39.093 10:45:51 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:35:39.093 10:45:51 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:39.093 10:45:51 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:39.093 ************************************ 00:35:39.093 START TEST bdev_nbd 00:35:39.093 ************************************ 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux 
]] 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3592444 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3592444 /var/tmp/spdk-nbd.sock 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3592444 ']' 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:39.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:39.093 10:45:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:39.352 [2024-07-26 10:45:52.019494] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:35:39.352 [2024-07-26 10:45:52.019550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:39.352 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:39.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:39.352 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:39.352 [2024-07-26 10:45:52.153660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.352 [2024-07-26 10:45:52.198712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.352 [2024-07-26 10:45:52.219935] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:35:39.352 [2024-07-26 10:45:52.227962] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:35:39.352 [2024-07-26 10:45:52.235980] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:35:39.612 [2024-07-26 10:45:52.344497] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:35:42.147 [2024-07-26 10:45:54.672337] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:35:42.147 [2024-07-26 10:45:54.672392] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:35:42.147 [2024-07-26 10:45:54.672406] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:42.147 [2024-07-26 10:45:54.680357] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:35:42.147 [2024-07-26 10:45:54.680375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:35:42.147 [2024-07-26 10:45:54.680386] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:42.147 [2024-07-26 10:45:54.688378] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:35:42.147 [2024-07-26 10:45:54.688395] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:35:42.147 [2024-07-26 10:45:54.688406] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:42.147 [2024-07-26 10:45:54.696398] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:35:42.147 [2024-07-26 10:45:54.696415] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:35:42.147 [2024-07-26 10:45:54.696425] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:42.147 10:45:54 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:42.714 1+0 records in 00:35:42.714 1+0 records out 00:35:42.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262835 s, 15.6 MB/s 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:42.714 1+0 records in 00:35:42.714 1+0 records out 00:35:42.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308867 s, 13.3 MB/s 00:35:42.714 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:42.715 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:42.715 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:42.973 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:43.232 1+0 records in 00:35:43.232 1+0 records out 00:35:43.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314136 s, 13.0 MB/s 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:43.232 10:45:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:35:43.534 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:43.535 10:45:56 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:43.535 1+0 records in 00:35:43.535 1+0 records out 00:35:43.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355154 s, 11.5 MB/s 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd0", 00:35:43.535 "bdev_name": "crypto_ram" 00:35:43.535 }, 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd1", 00:35:43.535 "bdev_name": "crypto_ram2" 00:35:43.535 }, 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd2", 00:35:43.535 "bdev_name": "crypto_ram3" 00:35:43.535 }, 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd3", 00:35:43.535 "bdev_name": "crypto_ram4" 00:35:43.535 } 00:35:43.535 ]' 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd0", 00:35:43.535 "bdev_name": "crypto_ram" 00:35:43.535 }, 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd1", 00:35:43.535 "bdev_name": "crypto_ram2" 00:35:43.535 }, 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd2", 00:35:43.535 "bdev_name": "crypto_ram3" 00:35:43.535 }, 00:35:43.535 { 00:35:43.535 "nbd_device": "/dev/nbd3", 00:35:43.535 "bdev_name": "crypto_ram4" 00:35:43.535 } 00:35:43.535 ]' 00:35:43.535 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:43.793 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:43.794 
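The loop traced above is the waitfornbd helper: after each nbd_start_disk RPC the test polls /proc/partitions for the new node, then proves the device answers a single 4 KiB O_DIRECT read and that the read produced a non-empty file. A minimal bash sketch of that pattern, assuming a short retry delay and a scratch output path (neither appears verbatim in this part of the trace):

  waitfornbd_sketch() {
      local nbd_name=$1 i
      # poll until the kernel exposes the partition entry (up to 20 tries, as in the trace)
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                  # retry delay is an assumption
      done
      # one direct read through the nbd node shows the SPDK<->kernel path is live
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]         # mirrors the "'[' 4096 '!=' 0 ']'" check above
  }

In the trace the same sequence runs once per started bdev, for nbd0 through nbd3.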
10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:43.794 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:44.052 10:45:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:44.311 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:44.570 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:44.829 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:35:45.088 /dev/nbd0 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:45.088 1+0 records in 00:35:45.088 1+0 records out 00:35:45.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281056 s, 14.6 MB/s 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:45.088 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.347 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:45.347 10:45:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:35:45.347 /dev/nbd1 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:45.347 1+0 records in 00:35:45.347 1+0 records out 00:35:45.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331216 s, 12.4 MB/s 00:35:45.347 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:35:45.605 /dev/nbd10 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:45.605 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:35:45.864 1+0 records in 00:35:45.864 1+0 records out 00:35:45.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327565 s, 12.5 MB/s 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:35:45.864 /dev/nbd11 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:45.864 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:46.123 1+0 records in 00:35:46.123 1+0 records out 00:35:46.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301729 s, 13.6 MB/s 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:46.123 
10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd0", 00:35:46.123 "bdev_name": "crypto_ram" 00:35:46.123 }, 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd1", 00:35:46.123 "bdev_name": "crypto_ram2" 00:35:46.123 }, 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd10", 00:35:46.123 "bdev_name": "crypto_ram3" 00:35:46.123 }, 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd11", 00:35:46.123 "bdev_name": "crypto_ram4" 00:35:46.123 } 00:35:46.123 ]' 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd0", 00:35:46.123 "bdev_name": "crypto_ram" 00:35:46.123 }, 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd1", 00:35:46.123 "bdev_name": "crypto_ram2" 00:35:46.123 }, 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd10", 00:35:46.123 "bdev_name": "crypto_ram3" 00:35:46.123 }, 00:35:46.123 { 00:35:46.123 "nbd_device": "/dev/nbd11", 00:35:46.123 "bdev_name": "crypto_ram4" 00:35:46.123 } 00:35:46.123 ]' 00:35:46.123 10:45:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:46.123 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:46.123 /dev/nbd1 00:35:46.123 /dev/nbd10 00:35:46.123 /dev/nbd11' 00:35:46.123 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:46.123 /dev/nbd1 00:35:46.123 /dev/nbd10 00:35:46.123 /dev/nbd11' 00:35:46.123 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:46.123 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:35:46.123 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:46.124 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:46.383 256+0 records in 00:35:46.383 256+0 records out 00:35:46.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105809 s, 99.1 MB/s 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
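Just above, the test re-queries the NBD layer: nbd_get_disks returns the device-to-bdev mapping as JSON, jq extracts the /dev/nbd* names, and grep -c confirms all four bdevs are mapped. Condensed into a sketch that reuses the RPC script and socket paths from the trace (error handling omitted):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)   # 4 expected at this point
  [ "$count" -eq 4 ]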
00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:46.383 256+0 records in 00:35:46.383 256+0 records out 00:35:46.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0404066 s, 26.0 MB/s 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:35:46.383 256+0 records in 00:35:46.383 256+0 records out 00:35:46.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0597708 s, 17.5 MB/s 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:35:46.383 256+0 records in 00:35:46.383 256+0 records out 00:35:46.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.050992 s, 20.6 MB/s 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:35:46.383 256+0 records in 00:35:46.383 256+0 records out 00:35:46.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335769 s, 31.2 MB/s 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:46.383 
10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:46.383 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:46.642 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:46.901 10:45:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
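The nbd_dd_data_verify pass completed above is a plain write-then-compare check: 1 MiB of /dev/urandom is written through each NBD node with O_DIRECT and then compared back against the source file with cmp. The same steps as a standalone sketch (device list and sizes as recorded in the trace; the temp path is shortened here):

  src=/tmp/nbdrandtest                                        # trace uses spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of="$src" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11; do
      dd if="$src" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the nbd node
      cmp -b -n 1M "$src" "$nbd"                              # any mismatch fails the test
  done
  rm "$src"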
/dev/nbd10 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:47.160 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:47.420 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 
/dev/nbd11' 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:35:47.989 10:46:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:48.248 malloc_lvol_verify 00:35:48.248 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:48.507 e7091f89-7ac3-4f5b-a63e-17c8eec43010 00:35:48.507 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:48.766 e669164f-2909-47c5-b6fa-ab1a87cc4a25 00:35:48.766 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:49.025 /dev/nbd0 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:35:49.025 mke2fs 1.46.5 (30-Dec-2021) 00:35:49.025 Discarding device blocks: 0/4096 done 00:35:49.025 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:49.025 00:35:49.025 Allocating group tables: 0/1 done 00:35:49.025 Writing inode tables: 0/1 done 00:35:49.025 Creating journal (1024 blocks): done 00:35:49.025 Writing superblocks and filesystem accounting information: 0/1 done 00:35:49.025 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:49.025 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3592444 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3592444 ']' 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3592444 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:49.284 10:46:01 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3592444 00:35:49.284 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:49.284 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:49.284 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3592444' 00:35:49.284 killing process with pid 3592444 00:35:49.284 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3592444 00:35:49.284 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3592444 00:35:49.543 10:46:02 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:35:49.543 00:35:49.543 real 0m10.375s 00:35:49.543 user 0m13.628s 00:35:49.543 sys 0m4.056s 00:35:49.543 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:49.543 10:46:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:49.543 ************************************ 00:35:49.543 END TEST bdev_nbd 00:35:49.543 ************************************ 00:35:49.543 10:46:02 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:35:49.544 10:46:02 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:35:49.544 10:46:02 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.544 10:46:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:35:49.544 ************************************ 00:35:49.544 START TEST bdev_fio 00:35:49.544 ************************************ 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:35:49.544 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:35:49.544 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:49.803 ************************************ 00:35:49.803 START TEST bdev_fio_rw_verify 00:35:49.803 ************************************ 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:35:49.803 
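The echo lines above add one fio job per crypto bdev to bdev.fio: a [job_<bdev>] header followed by filename=<bdev>; the [global] section was written earlier by fio_config_gen and is not visible in this part of the log. A sketch of the same loop, assuming a plain >> redirect into the config file (how the echoes are actually captured is not shown here):

  cfg=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
  for b in crypto_ram crypto_ram2 crypto_ram3 crypto_ram4; do
      echo "[job_$b]"    >> "$cfg"   # one fio job section per crypto bdev
      echo "filename=$b" >> "$cfg"
  done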
10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.803 10:46:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:35:50.062 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:50.062 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:50.062 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:50.062 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:50.062 fio-3.35 00:35:50.062 Starting 4 threads 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested 
device 0000:3d:01.6 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:50.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.321 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:50.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:50.322 EAL: Requested device 0000:3f:02.7 cannot be 
used 00:36:05.205 00:36:05.205 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3594898: Fri Jul 26 10:46:15 2024 00:36:05.205 read: IOPS=25.2k, BW=98.5MiB/s (103MB/s)(985MiB/10001msec) 00:36:05.205 slat (usec): min=15, max=446, avg=54.14, stdev=36.22 00:36:05.205 clat (usec): min=11, max=2291, avg=282.70, stdev=197.77 00:36:05.205 lat (usec): min=39, max=2468, avg=336.84, stdev=219.33 00:36:05.205 clat percentiles (usec): 00:36:05.205 | 50.000th=[ 239], 99.000th=[ 1037], 99.900th=[ 1221], 99.990th=[ 1434], 00:36:05.205 | 99.999th=[ 2114] 00:36:05.205 write: IOPS=27.6k, BW=108MiB/s (113MB/s)(1053MiB/9749msec); 0 zone resets 00:36:05.205 slat (usec): min=21, max=341, avg=64.46, stdev=34.82 00:36:05.205 clat (usec): min=16, max=1771, avg=340.24, stdev=225.38 00:36:05.205 lat (usec): min=43, max=1916, avg=404.70, stdev=245.62 00:36:05.205 clat percentiles (usec): 00:36:05.205 | 50.000th=[ 302], 99.000th=[ 1221], 99.900th=[ 1434], 99.990th=[ 1565], 00:36:05.205 | 99.999th=[ 1696] 00:36:05.205 bw ( KiB/s): min=92112, max=145528, per=98.03%, avg=108413.05, stdev=3101.29, samples=76 00:36:05.205 iops : min=23028, max=36382, avg=27103.26, stdev=775.32, samples=76 00:36:05.205 lat (usec) : 20=0.01%, 50=0.01%, 100=9.10%, 250=36.82%, 500=40.52% 00:36:05.205 lat (usec) : 750=8.45%, 1000=3.28% 00:36:05.205 lat (msec) : 2=1.82%, 4=0.01% 00:36:05.205 cpu : usr=99.65%, sys=0.00%, ctx=60, majf=0, minf=261 00:36:05.205 IO depths : 1=10.1%, 2=25.5%, 4=51.2%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.206 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.206 issued rwts: total=252270,269530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.206 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:05.206 00:36:05.206 Run status group 0 (all jobs): 00:36:05.206 READ: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=985MiB (1033MB), run=10001-10001msec 00:36:05.206 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=1053MiB (1104MB), run=9749-9749msec 00:36:05.206 00:36:05.206 real 0m13.534s 00:36:05.206 user 0m53.926s 00:36:05.206 sys 0m0.603s 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:36:05.206 ************************************ 00:36:05.206 END TEST bdev_fio_rw_verify 00:36:05.206 ************************************ 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:36:05.206 10:46:16 
blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "2ca0b693-7caf-534b-881a-65f592a724ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2ca0b693-7caf-534b-881a-65f592a724ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "766c74a1-4cca-5faa-91ee-6945bd563362"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "766c74a1-4cca-5faa-91ee-6945bd563362",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": 
"test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5ac9adc1-80d8-50f7-9e97-36139d84746e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5ac9adc1-80d8-50f7-9e97-36139d84746e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ae73b0e2-5550-5edc-ab0f-8abb3bf7d3f7"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ae73b0e2-5550-5edc-ab0f-8abb3bf7d3f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:36:05.206 crypto_ram2 00:36:05.206 crypto_ram3 00:36:05.206 crypto_ram4 ]] 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "2ca0b693-7caf-534b-881a-65f592a724ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2ca0b693-7caf-534b-881a-65f592a724ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' 
"nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "766c74a1-4cca-5faa-91ee-6945bd563362"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "766c74a1-4cca-5faa-91ee-6945bd563362",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "5ac9adc1-80d8-50f7-9e97-36139d84746e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "5ac9adc1-80d8-50f7-9e97-36139d84746e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "ae73b0e2-5550-5edc-ab0f-8abb3bf7d3f7"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ae73b0e2-5550-5edc-ab0f-8abb3bf7d3f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:36:05.206 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:05.207 ************************************ 00:36:05.207 START TEST bdev_fio_trim 00:36:05.207 ************************************ 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:05.207 10:46:16 
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:05.207 10:46:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:05.207 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:05.207 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:05.207 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:05.207 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:36:05.207 fio-3.35 00:36:05.207 Starting 4 threads 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 
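For reference, the trim pass starting above drives stock fio against the SPDK bdevs through the spdk_bdev ioengine plugin. Below is a minimal sketch of what fio_config_gen and blockdev.sh assemble, assuming the workspace paths from this run; the [global] defaults come from a template that fio_config_gen cats in and are not reproduced here.

# Approximate shape of the generated bdev.fio for the trim pass, based on the
# echo/printf calls traced above (sketch only, not the verbatim file).
cat > /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio <<'EOF'
rw=trimwrite

[job_crypto_ram]
filename=crypto_ram

[job_crypto_ram2]
filename=crypto_ram2

[job_crypto_ram3]
filename=crypto_ram3

[job_crypto_ram4]
filename=crypto_ram4
EOF

# The harness then preloads the SPDK fio plugin and runs plain fio, as in the
# traced command line above:
LD_PRELOAD=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio \
  --verify_state_save=0 \
  --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json \
  --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output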
00:36:05.207 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:05.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:05.207 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:17.486 00:36:17.486 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3597411: Fri Jul 26 10:46:29 2024 00:36:17.486 write: IOPS=36.4k, BW=142MiB/s (149MB/s)(1421MiB/10001msec); 0 zone resets 00:36:17.486 slat (usec): min=13, max=1846, avg=60.96, stdev=32.35 00:36:17.486 clat (usec): min=58, max=2604, avg=505.19, stdev=168.22 00:36:17.486 lat (usec): min=96, max=2653, avg=566.15, stdev=162.29 00:36:17.486 clat percentiles (usec): 00:36:17.486 | 50.000th=[ 570], 99.000th=[ 783], 99.900th=[ 898], 99.990th=[ 1172], 00:36:17.486 | 99.999th=[ 2507] 00:36:17.486 bw ( KiB/s): min=133904, max=215435, per=100.00%, avg=146058.68, stdev=6365.48, samples=76 00:36:17.486 iops : min=33476, max=53858, avg=36514.53, stdev=1591.34, samples=76 00:36:17.486 trim: IOPS=36.4k, BW=142MiB/s (149MB/s)(1421MiB/10001msec); 0 zone resets 00:36:17.486 slat (usec): min=4, max=425, avg=16.51, stdev= 5.42 00:36:17.486 clat (usec): min=27, max=1959, avg=157.81, stdev=127.07 00:36:17.486 lat (usec): min=32, max=1976, avg=174.31, stdev=128.05 00:36:17.486 clat percentiles (usec): 00:36:17.486 | 50.000th=[ 91], 99.000th=[ 537], 99.900th=[ 635], 99.990th=[ 709], 00:36:17.486 | 99.999th=[ 775] 00:36:17.486 bw ( KiB/s): min=133888, max=215547, per=100.00%, avg=146065.00, stdev=6370.98, samples=76 00:36:17.486 iops : min=33472, max=53886, avg=36516.21, stdev=1592.70, samples=76 00:36:17.486 lat (usec) : 50=3.01%, 100=25.58%, 250=15.27%, 500=25.01%, 750=29.97% 00:36:17.486 lat (usec) : 1000=1.15% 00:36:17.486 lat (msec) : 2=0.01%, 4=0.01% 00:36:17.486 cpu : usr=99.62%, sys=0.00%, ctx=97, majf=0, minf=163 00:36:17.486 IO depths : 1=0.1%, 2=11.7%, 4=52.9%, 8=35.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.486 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.486 issued rwts: total=0,363873,363874,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.486 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:17.486 00:36:17.486 Run status group 0 (all jobs): 00:36:17.486 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=1421MiB (1490MB), run=10001-10001msec 00:36:17.486 TRIM: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=1421MiB (1490MB), run=10001-10001msec 00:36:17.486 00:36:17.486 real 0m13.572s 00:36:17.486 user 0m53.769s 00:36:17.486 sys 0m0.625s 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:36:17.486 ************************************ 00:36:17.486 END TEST bdev_fio_trim 00:36:17.486 ************************************ 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:36:17.486 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:36:17.486 00:36:17.486 real 0m27.455s 00:36:17.486 user 1m47.865s 00:36:17.486 sys 0m1.429s 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:17.486 ************************************ 00:36:17.486 END TEST bdev_fio 00:36:17.486 ************************************ 00:36:17.486 10:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:17.486 10:46:29 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:17.486 10:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:36:17.486 10:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:17.486 10:46:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:36:17.486 ************************************ 00:36:17.486 START TEST bdev_verify 00:36:17.486 ************************************ 00:36:17.486 10:46:29 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:17.486 [2024-07-26 10:46:30.010135] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
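The bdev_verify stage launched here exercises the same crypto bdevs with SPDK's bdevperf example application instead of fio. A minimal sketch of the invocation follows, assuming the workspace paths from this run; every flag value is taken from the traced command line.

# --json   bdev/crypto configuration that bdevperf loads at startup
# -q 128   queue depth (outstanding I/Os per job)
# -o 4096  I/O size in bytes (the big-I/O pass that follows uses -o 65536)
# -w verify  write, read back, and compare workload
# -t 5     run time in seconds
# -m 0x3   core mask: cores 0 and 1, matching the two reactors reported below
# -C and the trailing '' are forwarded unchanged by the test harness
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
  --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''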
00:36:17.486 [2024-07-26 10:46:30.010194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599027 ] 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:17.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.486 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:17.487 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:17.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:17.487 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:17.487 [2024-07-26 10:46:30.145051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:17.487 [2024-07-26 10:46:30.190054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.487 [2024-07-26 10:46:30.190059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.487 [2024-07-26 10:46:30.211381] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:36:17.487 [2024-07-26 10:46:30.219410] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:36:17.487 [2024-07-26 10:46:30.227433] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:36:17.487 [2024-07-26 10:46:30.334093] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:36:20.022 [2024-07-26 10:46:32.659690] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:36:20.022 [2024-07-26 10:46:32.659769] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:20.022 [2024-07-26 10:46:32.659783] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:20.022 [2024-07-26 10:46:32.667708] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:36:20.022 [2024-07-26 10:46:32.667726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:20.022 [2024-07-26 10:46:32.667737] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:20.022 [2024-07-26 10:46:32.675729] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:36:20.022 [2024-07-26 10:46:32.675747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:36:20.022 [2024-07-26 10:46:32.675757] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:20.022 [2024-07-26 10:46:32.683754] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:36:20.022 [2024-07-26 10:46:32.683770] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:36:20.022 [2024-07-26 10:46:32.683781] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:20.022 Running I/O for 5 seconds... 00:36:25.302 00:36:25.302 Latency(us) 00:36:25.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.302 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x0 length 0x1000 00:36:25.302 crypto_ram : 5.06 1023.00 4.00 0.00 0.00 124226.15 7392.46 98566.14 00:36:25.302 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x1000 length 0x1000 00:36:25.302 crypto_ram : 5.06 1025.22 4.00 0.00 0.00 123957.86 16462.64 98566.14 00:36:25.302 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x0 length 0x1000 00:36:25.302 crypto_ram2 : 5.06 1030.28 4.02 0.00 0.00 123267.13 3486.52 83047.22 00:36:25.302 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x1000 length 0x1000 00:36:25.302 crypto_ram2 : 5.07 1035.24 4.04 0.00 0.00 122937.48 5609.88 83466.65 00:36:25.302 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x0 length 0x1000 00:36:25.302 crypto_ram3 : 5.05 3266.62 12.76 0.00 0.00 38798.01 2896.69 39007.03 00:36:25.302 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x1000 length 0x1000 00:36:25.302 crypto_ram3 : 5.05 3266.69 12.76 0.00 0.00 38771.26 6055.53 38797.31 00:36:25.302 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x0 length 0x1000 00:36:25.302 crypto_ram4 : 5.06 3264.75 12.75 0.00 0.00 38685.47 3434.09 35861.30 00:36:25.302 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:25.302 Verification LBA range: start 0x1000 length 0x1000 00:36:25.302 crypto_ram4 : 5.07 3284.11 12.83 0.00 0.00 38489.08 1068.24 35232.15 00:36:25.302 =================================================================================================================== 00:36:25.302 Total : 17195.91 67.17 0.00 0.00 59015.24 1068.24 98566.14 00:36:25.302 00:36:25.302 real 0m8.209s 00:36:25.302 user 0m15.540s 00:36:25.302 sys 0m0.473s 00:36:25.302 10:46:38 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:25.302 10:46:38 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:36:25.302 ************************************ 00:36:25.302 END TEST bdev_verify 00:36:25.302 ************************************ 00:36:25.561 10:46:38 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:25.561 10:46:38 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:36:25.561 10:46:38 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:25.561 10:46:38 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:36:25.561 ************************************ 00:36:25.561 
START TEST bdev_verify_big_io 00:36:25.561 ************************************ 00:36:25.561 10:46:38 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:25.561 [2024-07-26 10:46:38.306243] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:36:25.561 [2024-07-26 10:46:38.306300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600357 ] 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.561 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:25.561 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 
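Both bdevperf stages consume the bdev.json written earlier by the harness. That file is not reproduced verbatim in this log; the fragment below is a hypothetical reconstruction of one crypto bdev entry, using the standard SPDK JSON-config layout and the driver_specific fields dumped above. crypto_ram2 through crypto_ram4 follow the same pattern over Malloc1 to Malloc3, with 4096-byte blocks for the last two, and the test_dek_aesni_cbc_* AES-CBC keys are registered with the accel framework elsewhere in the same file. The "vbdev creation deferred pending base bdev arrival" notices simply mean a crypto entry was processed before its Malloc base bdev existed; creation completes once the base bdev registers.

# Hypothetical sketch only -- the real bdev.json is generated by the test
# scripts; parameter names mirror the bdev dump above, not a copy of the file.
cat > bdev.json.sketch <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        },
        {
          "method": "bdev_crypto_create",
          "params": {
            "base_bdev_name": "Malloc0",
            "name": "crypto_ram",
            "key_name": "test_dek_aesni_cbc_1"
          }
        }
      ]
    }
  ]
}
EOF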
00:36:25.562 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:25.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:25.562 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:25.562 [2024-07-26 10:46:38.440201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:25.821 [2024-07-26 10:46:38.484882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.821 [2024-07-26 10:46:38.484888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.821 [2024-07-26 10:46:38.506199] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:36:25.821 [2024-07-26 10:46:38.514226] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:36:25.821 [2024-07-26 10:46:38.522249] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:36:25.821 [2024-07-26 10:46:38.624762] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:36:28.354 [2024-07-26 10:46:40.951620] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:36:28.354 [2024-07-26 10:46:40.951690] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:28.354 [2024-07-26 10:46:40.951704] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:28.355 [2024-07-26 10:46:40.959635] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:36:28.355 [2024-07-26 10:46:40.959653] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:28.355 [2024-07-26 10:46:40.959664] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:28.355 [2024-07-26 10:46:40.967657] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: 
Found key "test_dek_aesni_cbc_3" 00:36:28.355 [2024-07-26 10:46:40.967674] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:36:28.355 [2024-07-26 10:46:40.967684] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:28.355 [2024-07-26 10:46:40.975679] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:36:28.355 [2024-07-26 10:46:40.975695] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:36:28.355 [2024-07-26 10:46:40.975706] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:28.355 Running I/O for 5 seconds... 00:36:29.295 [2024-07-26 10:46:41.867441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.868898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.869044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.869099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.869137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.869182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.869449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.869466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.870465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.870516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.870555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.870592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.871014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.871059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.871098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.871158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.871504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.871520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.295 [2024-07-26 10:46:41.873178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.295 [2024-07-26 10:46:41.873237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (identical error repeated for every subsequent allocation attempt through [2024-07-26 10:46:42.153545]; duplicate log entries elided)
00:36:29.302 [2024-07-26 10:46:42.155350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.155914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.156274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.156869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.157100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.157116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.159440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.160159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.161409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.162879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.164542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.164902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.165259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.166555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.166838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.166854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.168554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.169964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.171203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.172665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.173668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.174029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.174542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.175812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.302 [2024-07-26 10:46:42.176044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.176060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.177324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.178585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.180074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.181555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.182219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.182581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.183860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.185109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.185345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.185361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.187460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.188721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.190208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.191699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.192467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.302 [2024-07-26 10:46:42.193032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.194278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.195746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.195978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.195994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.198121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.199596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.565 [2024-07-26 10:46:42.201070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.202223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.203024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.204335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.205580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.207062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.207299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.207315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.209356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.210782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.212266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.212693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.213582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.214841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.216319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.217799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.218030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.218047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.220357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.221850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.223071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.223433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.225107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.226361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.565 [2024-07-26 10:46:42.227837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.229309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.229638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.229654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.231924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.233439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.233804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.234164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.235727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.237129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.238501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.239106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.239343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.239360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.240440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.240805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.241170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.241534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.242315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.242682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.243048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.243415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.243824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.243840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.565 [2024-07-26 10:46:42.245323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.245687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.246041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.246405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.565 [2024-07-26 10:46:42.247117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.247483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.247840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.248207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.248547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.248563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.249697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.250062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.250102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.250461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.250502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.250777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.251226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.251586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.251940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.252307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.252591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.252606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.253541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.253590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.566 [2024-07-26 10:46:42.253628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.253665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.253963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.254093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.254135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.254181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.254233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.254580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.254596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.256974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.257338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.257355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.566 [2024-07-26 10:46:42.258579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.258843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.259228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.259245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.260333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.260396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.260451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.260504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.260959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.261086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.261128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.261174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.261211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.261581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.261597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.262434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.262481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.262519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.262557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.262972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.263113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.566 [2024-07-26 10:46:42.263162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.263201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.263238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.263526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.263542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.264641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.264688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.264726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.264765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.265660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.266516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.266562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.266599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.266638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.566 [2024-07-26 10:46:42.267052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.267196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.267253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.267315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.567 [2024-07-26 10:46:42.267356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.267722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.267737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.269962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.270873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.270919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.270956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.271006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.271413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.271549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.271610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.271663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.271716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.272094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.567 [2024-07-26 10:46:42.272110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.273725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.274137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.274160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.275891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.276182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.276199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.567 [2024-07-26 10:46:42.277272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.277957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.278381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.278398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.279973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.280013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.280250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.280266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.281346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.281392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.281429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.567 [2024-07-26 10:46:42.281467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.281831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.281958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.281999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.282038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.282090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.282473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.282490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.283409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.283458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.283498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.567 [2024-07-26 10:46:42.283540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.283852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.283997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.284050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.284089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.284127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.284422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.284438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.285478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.285525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.285567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.285605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.285916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.568 [2024-07-26 10:46:42.286047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.286089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.286127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.286170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.286494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.286510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.287682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.287742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.287796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.287849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.288867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.289668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.289715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.289753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.289791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.290067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.290201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.568 [2024-07-26 10:46:42.290244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.568 [2024-07-26 10:46:42.290282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:29.568 (last message repeated continuously from 10:46:42.290 through 10:46:42.616 during the allocation-failure burst)
00:36:29.837 [2024-07-26 10:46:42.616048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:29.837 [2024-07-26 10:46:42.616494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.616859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.617223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.617580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.617954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.617970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.618904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.618952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.618991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.619964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.837 [2024-07-26 10:46:42.621697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.621735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.622125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.622147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.623965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.624003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.624430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.624447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.625605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.625656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.625694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.625733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.626093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.626231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.626292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.626346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.626385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.837 [2024-07-26 10:46:42.626677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.626694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.627855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.627903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.627941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.627979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.628362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.628490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.628533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.628572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.628610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.629004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.629021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.630018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.630076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.630121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.837 [2024-07-26 10:46:42.630183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.630591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.630722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.630764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.630803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.630841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.631255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.631272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.838 [2024-07-26 10:46:42.632415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.632462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.632499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.632537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.632907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.633034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.633089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.633168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.633209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.633520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.633536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.634734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.634780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.634822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.634860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.635903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.636918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.636975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.838 [2024-07-26 10:46:42.637027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.637082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.637432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.637565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.637606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.637651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.637690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.638091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.638108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.639890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.640232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.640249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.641381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.641428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.641466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.641505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.838 [2024-07-26 10:46:42.641853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.641974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.642016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.642055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.642093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.642500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.642517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.643492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.643549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.643607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.643662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.644707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.645763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.645810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.645848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.645886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.646208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.646352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.838 [2024-07-26 10:46:42.646406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.646445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.646483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.646761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.646777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.647814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.647862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.647900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.647938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.648300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.648426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.648469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.838 [2024-07-26 10:46:42.648508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.648545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.648939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.648958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.649933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.649992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.650035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.650075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.650434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.650556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.650599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.650638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.839 [2024-07-26 10:46:42.650676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.651069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.651086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.652968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.653290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.653307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.654465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.654515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.654553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.654591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.655020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.655151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.655196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.655235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.655274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.655661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.839 [2024-07-26 10:46:42.655680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.656565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.656611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.656659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.656697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.657785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.660433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.660480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.660519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.660557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.660937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.661060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.661102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.661157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.661197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.661610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.661627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.839 [2024-07-26 10:46:42.664250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.664806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.665068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.665084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.666981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.667032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.667069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.667107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.667342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.839 [2024-07-26 10:46:42.667472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.667515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.667577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.667619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.667977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.667993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.669951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.840 [2024-07-26 10:46:42.670076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.670810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.672945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.672996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.673796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.675711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.675758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.675800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.675846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.676266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.840 [2024-07-26 10:46:42.676394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.676437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.676475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.676514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.676856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.676872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.678619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.678671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.678709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.678746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.678973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.679103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.679150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.679189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.679226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.679454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.679470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.681713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.681760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.681803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.681849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.682075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.682208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.682251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.840 [2024-07-26 10:46:42.682288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.682325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.682552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.682568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.684385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.684431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.684468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.684505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.684844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.684984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.685038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.685076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.685114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.685414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.685430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:29.840 [2024-07-26 10:46:42.687760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:29.840 [2024-07-26 10:46:42.687988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:29.840 [... the same accel_dpdk_cryptodev.c:468 *ERROR*: "Failed to get src_mbufs!" entry repeats for every failed allocation attempt between 10:46:42.687988 and 10:46:43.009976 ...]
00:36:30.370 [2024-07-26 10:46:43.009976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:30.370 [2024-07-26 10:46:43.010022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.010063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.010294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.010310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.011875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.012100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.012115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.012999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.013803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.370 [2024-07-26 10:46:43.014069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.014085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.014960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.015880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.016794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.016842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.016879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.016920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.370 [2024-07-26 10:46:43.017648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.371 [2024-07-26 10:46:43.018444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.018495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.018548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.018587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.018813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.018941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.018985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.019023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.019060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.019292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.019308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.020861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.021091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.021107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.021912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.021970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.371 [2024-07-26 10:46:43.022007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.022772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.023636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.023683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.023720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.023757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.024677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.025522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.025569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.025607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.025644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.371 [2024-07-26 10:46:43.025873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.026002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.026053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.026090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.026128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.026361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.026377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.027866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.028172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.028189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.371 [2024-07-26 10:46:43.029628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.029945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.030795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.030848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.030884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.030922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.031642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.032555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.032614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.371 [2024-07-26 10:46:43.032652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.032689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.032994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.033127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.033174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.033215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.372 [2024-07-26 10:46:43.033254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.033482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.033498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.034909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.035135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.035158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.035953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.035999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.036958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.372 [2024-07-26 10:46:43.036975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.037965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.038826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.039650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.039704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.039742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.039785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.040505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.041410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.372 [2024-07-26 10:46:43.041470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.041512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.041550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.041870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.041996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.042037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.042075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.042113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.042405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.042422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.043949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.044219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.044235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.372 [2024-07-26 10:46:43.045155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.045814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.046086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.046102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.046919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.046966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.047005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.047042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.047276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.047401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.372 [2024-07-26 10:46:43.047443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.047489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.047533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.047759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.047775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.048784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.050295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.050341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.052005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.052347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.373 [2024-07-26 10:46:43.052621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.052681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.052733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.052772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.053125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.053146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.055323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.056819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.057329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.058586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.058817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.060322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.061857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.062230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.062592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.062874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.062890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.065257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.066779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.067611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.068853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.069083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.070631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.072064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.373 [2024-07-26 10:46:43.072426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.072784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.073015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.073029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.075294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.076352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.077631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.078885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.079116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.080698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.081389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.081752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.082157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.082389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.082405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.084684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.085342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.086589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.088076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.088310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.089755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.090118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.090483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.091685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.373 [2024-07-26 10:46:43.091961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.091976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.093663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.095148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.096449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.097946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.098182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.098762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.099125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.099688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.100931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.101165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.101181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.102666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.103923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.105406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.106889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.107121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.107564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.107930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.109229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.110470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.110700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.373 [2024-07-26 10:46:43.110716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.373 [2024-07-26 10:46:43.113017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.373 [2024-07-26 10:46:43.114417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.899 [2024-07-26 10:46:43.539686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.899 [2024-07-26 10:46:43.602609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.899 [2024-07-26 10:46:43.603810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.899 [2024-07-26 10:46:43.605036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.899 [2024-07-26 10:46:43.606500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.899 [2024-07-26 10:46:43.607729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.899 [2024-07-26 10:46:43.609198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.899 [2024-07-26 10:46:43.610485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.611958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.612193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.612210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.613653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.614016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.614376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.615991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.617784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.619285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.619785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.621062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.621297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.621313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.622486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.622848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.623207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.623560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.625385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.900 [2024-07-26 10:46:43.626658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.628127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.628589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.628819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.628838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.631285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.631652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.631698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.632459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.633310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.634500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.634863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.635226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.635583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.635599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.638357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.638410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.638762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.638803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.639666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.639727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.641324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.641383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.641611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.900 [2024-07-26 10:46:43.641626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.643973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.644025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.644384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.644426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.645248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.645303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.645660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.645706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.646051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.646071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.650095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.650160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.650519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.650564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.651223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.651275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.651627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.651668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.652023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.652039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.654818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.654873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.655234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.900 [2024-07-26 10:46:43.655275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.656080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.656133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.656492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.656532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.656858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.656875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.659567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.659620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.660589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.660635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.661760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.661814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.663123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.663174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.663403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.663419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.666307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.666369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.666730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.666789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.667565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.667616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.667968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.900 [2024-07-26 10:46:43.668008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.900 [2024-07-26 10:46:43.668432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.668449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.671137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.671194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.671545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.671586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.672346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.672405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.672765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.672810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.673225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.673243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.675972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.676027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.676385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.676427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.677178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.677230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.677586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.677637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.677991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.678008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.680898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.901 [2024-07-26 10:46:43.680958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.681324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.681371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.682137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.682197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.682552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.682592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.682984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.683001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.685619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.685671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.686023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.686063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.686824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.686879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.687250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.687293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.687699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.687715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.690638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.690689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.691042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.691082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.691921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.901 [2024-07-26 10:46:43.691983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.692354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.692411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.692746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.692763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.695672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.695737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.696088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.696131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.696963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.697014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.697373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.697414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.697765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.697781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.700642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.700704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.701062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.701114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.701912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.701963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.702321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.702362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.702763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.901 [2024-07-26 10:46:43.702781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.705422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.705474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.705829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.705888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.706760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.706813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.707172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.707213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.707579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.707596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.710273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.710334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.710690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.710730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.711511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.711570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.711925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.711970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.712409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.901 [2024-07-26 10:46:43.712427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.715126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.715185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.715536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.902 [2024-07-26 10:46:43.715575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.716405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.716457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.716817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.716862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.717224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.717241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.720189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.720250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.720603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.720644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.721467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.721519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.721871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.721911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.722275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.722292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.725105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.725179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.725546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.725600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.726354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.726407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.726759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.902 [2024-07-26 10:46:43.726800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.727208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.727226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.729803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.729856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.730215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.730256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.731604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.731658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.732068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.732112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.732347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.732364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.736444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.736509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.738001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.738045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.739818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.739870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.741214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.741257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.741520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.741536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.743793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.902 [2024-07-26 10:46:43.743845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.744204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.744249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.745790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.745842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.747304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.747347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.747576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.747592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.750364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.750417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.750782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.750823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.751634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.751684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.752040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.752090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.752327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.752343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.754898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.754965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.755325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.755366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.756162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.902 [2024-07-26 10:46:43.756213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.757385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.757429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.757731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.757747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.760857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.760921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.760959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.760994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.762704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.762753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.762791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.763148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.763533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.763550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.765540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.902 [2024-07-26 10:46:43.765586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.765623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.765661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.766085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.766129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.766173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.766211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.766441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.903 [2024-07-26 10:46:43.766457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.768703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.768750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.768787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.768824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.769318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.769363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.769403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.769445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.769672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.769688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.771929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.772164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.772181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.774235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.774282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:30.903 [2024-07-26 10:46:43.774322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:30.903 [2024-07-26 10:46:43.774841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the identical "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" message repeats continuously from 10:46:43.774841 through 10:46:44.189942 (log timestamps 00:36:30.903-00:36:31.430) ...]
00:36:31.430 [2024-07-26 10:46:44.189942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:31.430 [2024-07-26 10:46:44.190416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.194754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.194814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.194853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.195797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.196027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.199444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.199492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.199846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.199888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.200178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.203674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.205163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.430 [2024-07-26 10:46:44.205207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.205244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.205573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.211251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.211307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.211348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.211700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.212078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.216155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.216988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.217030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.431 [2024-07-26 10:46:44.217074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.217308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.225406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.226995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.227040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.228514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.228744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.233026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.233458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.233502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.234697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.235122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.241891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.242698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.242746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.244159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.244600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.246618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.248042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.248087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.249719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.249950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.255151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.255514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.431 [2024-07-26 10:46:44.255555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.256487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.256727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.263085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.264337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.264382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.265868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.266097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.271925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.273455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.273524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.274990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.275305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.281903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.282276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.282321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.283628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.283858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.291193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.291558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.291605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.292916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.293343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.300111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.431 [2024-07-26 10:46:44.300884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.300931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.302339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.302755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.308220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.309474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.309518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.310994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.311227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.317121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.318627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.318674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.319490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.319720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.325182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.325546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.325587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.327102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.431 [2024-07-26 10:46:44.327384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.335645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.336705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.337062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.337106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.337376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.690 [2024-07-26 10:46:44.347262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.348607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.349045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.349405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.349635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.358600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.360108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.360469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.360996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.361230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.378011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.379502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.379860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.690 [2024-07-26 10:46:44.379905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.765033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.766600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.766960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.767318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.776087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.777296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.777336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.777375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.949 [2024-07-26 10:46:44.777730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.778117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.950 [2024-07-26 10:46:44.781996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.783266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.783783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.783824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.784147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.790655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.790712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.791980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.792510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.792930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.795223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.795606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.795961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.796014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.796256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.803327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.803709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.803753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.804109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.804470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.810153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.810209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.811667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.812111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.950 [2024-07-26 10:46:44.812538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.817130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.818099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.818868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.818915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.819206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.822784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.823165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.823225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.823586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.823938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.832098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.832160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.832971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.834312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.834659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.838046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.838683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.839039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.839080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.839377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.842420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.842797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.842844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:31.950 [2024-07-26 10:46:44.843213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.843595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.847923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.847978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.848340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.848699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:31.950 [2024-07-26 10:46:44.848992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.245 [2024-07-26 10:46:44.854677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.855063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.855549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.855595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.855824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.859284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.859649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.859694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.860667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.860917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.864102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.864171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.865658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.865699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.866069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.868627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.869002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:32.246 [2024-07-26 10:46:44.869049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.869100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.869422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.875034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.875090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.875450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.875491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.875774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.879657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.879711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.880739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.880781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.881027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.889541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.889596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.890409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.890451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.890710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.895679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.895733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.896851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.896907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.897134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.904619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:32.246 [2024-07-26 10:46:44.904673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.905340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.905401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.905806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.911282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.911337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.912696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.912739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.912967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.920152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.920207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.921741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.921792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.922021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.925245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.925301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.926754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.926797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.927211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.932525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.932587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.933064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.933108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.933345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:32.246 [2024-07-26 10:46:44.938666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.938721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.940192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.940235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.940561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.947825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.947881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.949105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.949154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.949382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.953515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.953575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.955046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.955090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.955324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.960518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.960574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.961088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.961132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.961366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.965871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.967133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.967183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.967220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:32.246 [2024-07-26 10:46:44.967448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.246 [2024-07-26 10:46:44.971027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.971083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.971123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.971168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.971512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.976372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.976420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.976457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.976503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.976896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.978097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.979436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.980752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.980795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.981063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.984592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.984641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.984683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.985696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.985983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.987572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.988787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.988832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:32.247 [2024-07-26 10:46:44.988869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.989316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.996775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.996830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.996871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.997907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:44.998137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.004153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.004204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.005167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.005211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.005497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.010823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.012314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.012361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.012405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.012726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.018419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.018472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.018509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.019498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.019728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.027367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.247 [2024-07-26 10:46:45.027424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:32.247 [2024-07-26 10:46:45.028594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:32.247 [... the same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 is emitted for every task allocation attempt between 10:46:45.028 and 10:46:45.663; duplicate entries omitted ...]
00:36:32.774 [2024-07-26 10:46:45.663438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:32.774 [2024-07-26 10:46:45.665041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.774 [2024-07-26 10:46:45.665413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.774 [2024-07-26 10:46:45.665459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.774 [2024-07-26 10:46:45.665737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.774 [2024-07-26 10:46:45.671209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.774 [2024-07-26 10:46:45.672488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:32.774 [2024-07-26 10:46:45.672533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.674034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.674269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.679290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.679343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.680142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.680185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.680412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.683733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.684987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.685039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.685079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.685314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.689672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.689726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.690977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.691019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.691253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.035 [2024-07-26 10:46:45.696420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.696474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.696863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.696905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.697205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.701248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.701302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.702046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.702093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.702517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.708023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.708077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.709339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.709383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.709611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.713227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.713281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.714749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.714793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.715021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.719597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.719648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.720005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.720044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.035 [2024-07-26 10:46:45.720326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.723525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.723578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.723932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.723974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.724209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.729194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.729258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.730723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.730765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.731071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.735190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.735245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.736473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.736515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.736742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.741768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.741821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.742596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.742638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.742864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.746950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.747004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.748244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.035 [2024-07-26 10:46:45.748286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.748513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.754214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.754273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.035 [2024-07-26 10:46:45.754627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.754673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.755056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.759594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.759647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.760155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.760201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.760510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.763305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.764523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.764568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.764620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.764849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.766101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.766155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.766195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.766233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.766576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.770430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.770482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.036 [2024-07-26 10:46:45.770519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.770556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.770802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.772247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.773800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.775209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.775253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.775483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.779061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.779110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.779153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.780069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.780409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.783308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.784908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.784954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.784991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.785226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.788758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.788811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.788854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.789209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.789501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.792206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.036 [2024-07-26 10:46:45.792255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.793120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.793181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.793587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.796648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.798108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.798156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.798195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.798426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.801463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.801523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.801561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.801913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.802148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.805097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.805151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.806422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.806468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.806705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.809702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.811189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.811234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.811271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.811500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.036 [2024-07-26 10:46:45.814994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.815045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.815098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.815455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.815815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.819734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.819783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.821237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.821280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.821509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.826277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.827814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.827859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.827896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.828124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.832668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.832729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.832767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.833491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.833723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.837154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.837201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.838675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.036 [2024-07-26 10:46:45.838717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.037 [2024-07-26 10:46:45.838944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.841847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.843343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.843388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.844871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.845232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.849647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.849696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.850052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.850973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.851269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.854803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.856293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.856338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.857819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.858147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.862907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.864425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.864470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.865469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.865742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.868475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.870007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.870060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.037 [2024-07-26 10:46:45.871608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.871839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.875993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.876380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.876429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.876783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.877062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.880438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.882012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.882065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.882421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.882718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.885752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.886116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.886166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.886516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.886787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.888733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.890212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.890256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.891089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.891365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.893269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.893636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.037 [2024-07-26 10:46:45.893681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.894046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.894477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.897417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.897786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.897833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.899227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.899523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.901444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.901811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.901855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.902214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.902547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.905564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.906558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.906606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.906956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.907340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.909335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.910106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.910154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.911235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.911548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.914882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.037 [2024-07-26 10:46:45.915260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.915309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.915660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.915955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.920367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.920422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.921893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.922708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.923120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.926490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.927531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.928264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.928621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.929003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.932357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.932728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.933087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.933452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.037 [2024-07-26 10:46:45.933868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.939607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.939667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.939705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.940941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.941259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.301 [2024-07-26 10:46:45.944856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.945975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.946638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.946684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.946956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.950860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.950914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.951432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.952689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.953027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.955742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.957197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.958723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.958777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.959156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.961367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.961737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.961783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.962136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.962465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.965110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.965169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.965521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.965877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.301 [2024-07-26 10:46:45.966149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.968562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.968931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.969304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.969351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.969720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.972021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.972478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.972526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.973661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.974023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.979359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.979415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.979782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.980219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.980449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.985022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.985400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.985763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.985808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.986162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.991312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.992787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:36:33.301 [2024-07-26 10:46:45.992832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:36:33.301 [2024-07-26 10:46:45.994298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [... the same accel_dpdk_cryptodev.c:468 "Failed to get src_mbufs!" error repeats continuously (several hundred entries) between 10:46:45.994 and 10:46:46.497; the identical entries are omitted here ...]
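The burst of errors above comes from the cryptodev accel module failing to pull source mbufs out of its DPDK memory pool while the verify job keeps 128 IOs in flight; the run still completes and produces results below, so the failures look like transient back-pressure rather than fatal errors. As a rough, minimal sketch of the kind of bulk-allocation check that produces this message (the function, pool, and count names are illustrative assumptions, not the actual SPDK structures):

/* Illustrative sketch only: bulk mbuf allocation with the usual failure path. */
#include <errno.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include "spdk/log.h"

static int
task_alloc_src_mbufs(struct rte_mempool *mbuf_pool, struct rte_mbuf **src_mbufs,
                     unsigned int cryop_cnt)
{
        /* rte_pktmbuf_alloc_bulk() is all-or-nothing: if the pool cannot satisfy
         * the whole request (e.g. it is exhausted under a deep queue), it returns
         * a non-zero value and no mbufs are handed out. */
        if (rte_pktmbuf_alloc_bulk(mbuf_pool, src_mbufs, cryop_cnt) != 0) {
                SPDK_ERRLOG("Failed to get src_mbufs!\n");
                return -ENOMEM;
        }
        return 0;
}

When the caller turns -ENOMEM into a retry rather than an abort, the same log line simply repeats until the pool drains, which matches the roughly half-second burst recorded above.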
00:36:33.836 [2024-07-26 10:46:46.486289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.488996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.489050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.489088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.489947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.490184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.494178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.494251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:33.836 [2024-07-26 10:46:46.497367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:36:34.098
00:36:34.098 Latency(us)
00:36:34.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:34.098 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x0 length 0x100
00:36:34.098 crypto_ram : 5.79 44.73 2.80 0.00 0.00 2762787.64 32505.86 2603823.92
00:36:34.098 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x100 length 0x100
00:36:34.098 crypto_ram : 5.80 44.17 2.76 0.00 0.00 2801119.64 48863.64 2711198.11
00:36:34.098 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x0 length 0x100
00:36:34.098 crypto_ram2 : 5.79 45.23 2.83 0.00 0.00 2641773.94 3053.98 2603823.92
00:36:34.098 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x100 length 0x100
00:36:34.098 crypto_ram2 : 5.80 44.67 2.79 0.00 0.00 2679298.12 21495.81 2711198.11
00:36:34.098 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x0 length 0x100
00:36:34.098 crypto_ram3 : 5.58 312.84 19.55 0.00 0.00 366042.03 57461.96 560359.01
00:36:34.098 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x100 length 0x100
00:36:34.098 crypto_ram3 : 5.55 301.29 18.83 0.00 0.00 380802.14 47815.07 570425.34
00:36:34.098 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x0 length 0x100
00:36:34.098 crypto_ram4 : 5.71 329.04 20.56 0.00 0.00 337672.59 31876.71 422785.84
00:36:34.098 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:36:34.098 Verification LBA range: start 0x100 length 0x100
00:36:34.098 crypto_ram4 : 5.69 317.71 19.86 0.00 0.00 349359.12 22963.81 404330.91
00:36:34.098 ===================================================================================================================
00:36:34.098 Total : 1439.69 89.98 0.00 0.00 658434.84 3053.98 2711198.11
00:36:34.356
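The repeated "Failed to get src_mbufs!" messages above are emitted by accel_dpdk_cryptodev_task_alloc_resources when the DPDK cryptodev accel module cannot pull enough mbufs from its pool under the 128-deep verify workload; the jobs still finish and the test is marked as passed below, so they look like transient allocation failures rather than hard errors. The Latency(us) table is ordinary bdevperf verify output. A minimal sketch of the kind of invocation that produces such a table, reusing the bdevperf binary and bdev.json paths printed elsewhere in this log; the -t value is an assumption chosen to match the roughly 5.8 s runtimes reported, not a value taken from blockdev.sh:

    # Hedged sketch, not the exact blockdev.sh invocation: rerun the big-I/O
    # verify workload against the crypto bdevs described by bdev.json.
    BDEVPERF=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
    CONF=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5   # -t 5 is assumed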
00:36:34.356 real 0m8.948s 00:36:34.356 user 0m16.955s 00:36:34.356 sys 0m0.530s 00:36:34.356 10:46:47 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:34.356 10:46:47 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:36:34.356 ************************************ 00:36:34.356 END TEST bdev_verify_big_io 00:36:34.356 ************************************ 00:36:34.356 10:46:47 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:34.356 10:46:47 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:36:34.357 10:46:47 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:34.357 10:46:47 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:36:34.615 ************************************ 00:36:34.615 START TEST bdev_write_zeroes 00:36:34.615 ************************************ 00:36:34.615 10:46:47 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:34.615 [2024-07-26 10:46:47.341447] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:36:34.615 [2024-07-26 10:46:47.341502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601938 ] 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.3 cannot be 
used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:34.615 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:34.615 [2024-07-26 10:46:47.472240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.615 [2024-07-26 10:46:47.515389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.874 [2024-07-26 10:46:47.536617] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:36:34.874 [2024-07-26 10:46:47.544643] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:36:34.874 [2024-07-26 10:46:47.552662] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:36:34.874 [2024-07-26 10:46:47.656079] 
accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:36:37.405 [2024-07-26 10:46:49.982110] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:36:37.405 [2024-07-26 10:46:49.982178] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:36:37.405 [2024-07-26 10:46:49.982194] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:37.405 [2024-07-26 10:46:49.990128] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:36:37.405 [2024-07-26 10:46:49.990153] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:36:37.405 [2024-07-26 10:46:49.990165] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:37.405 [2024-07-26 10:46:49.998156] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:36:37.405 [2024-07-26 10:46:49.998173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:36:37.405 [2024-07-26 10:46:49.998184] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:37.405 [2024-07-26 10:46:50.006177] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:36:37.405 [2024-07-26 10:46:50.006194] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:36:37.405 [2024-07-26 10:46:50.006204] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:36:37.405 Running I/O for 1 seconds...
00:36:38.341
00:36:38.341 Latency(us)
00:36:38.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:38.341 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:36:38.341 crypto_ram : 1.02 2141.93 8.37 0.00 0.00 59316.00 5059.38 71303.17
00:36:38.341 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:36:38.341 crypto_ram2 : 1.02 2155.66 8.42 0.00 0.00 58695.05 5033.16 66270.00
00:36:38.341 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:36:38.341 crypto_ram3 : 1.02 16500.84 64.46 0.00 0.00 7642.35 2267.55 9961.47
00:36:38.341 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:36:38.341 crypto_ram4 : 1.02 16545.65 64.63 0.00 0.00 7598.47 1808.79 7969.18
00:36:38.341 ===================================================================================================================
00:36:38.341 Total : 37344.08 145.88 0.00 0.00 13550.12 1808.79 71303.17
00:36:38.600
00:36:38.600 real 0m4.142s
00:36:38.600 user 0m3.635s
00:36:38.600 sys 0m0.468s
00:36:38.600 10:46:51 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:36:38.600 10:46:51 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:36:38.600 ************************************
00:36:38.600 END TEST bdev_write_zeroes
00:36:38.600 ************************************
00:36:38.600 10:46:51 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:38.600 10:46:51
blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:36:38.600 10:46:51 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:38.600 10:46:51 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:36:38.859 ************************************ 00:36:38.859 START TEST bdev_json_nonenclosed 00:36:38.859 ************************************ 00:36:38.860 10:46:51 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:38.860 [2024-07-26 10:46:51.567839] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:36:38.860 [2024-07-26 10:46:51.567895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602502 ] 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:38.860 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:38.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:38.860 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:38.860 [2024-07-26 10:46:51.700361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.860 [2024-07-26 10:46:51.743948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.860 [2024-07-26 10:46:51.744013] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
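This is the expected outcome of the nonenclosed.json negative test: bdevperf is deliberately fed a configuration whose top level is not a JSON object, and json_config_prepare_ctx rejects it. The companion bdev_json_nonarray test below exercises the second rule, that the top-level "subsystems" member must be an array. The exact contents of nonenclosed.json are not shown in this log; the following is only an illustrative sketch of the shape the loader accepts:

    # Hedged sketch of a minimal well-formed SPDK JSON config. A file whose top
    # level is not enclosed in {} triggers the error above; a "subsystems" value
    # that is not an array triggers the error seen in the nonarray test below.
    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF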
00:36:38.860 [2024-07-26 10:46:51.744031] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:38.860 [2024-07-26 10:46:51.744042] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:39.119 00:36:39.119 real 0m0.309s 00:36:39.119 user 0m0.169s 00:36:39.119 sys 0m0.137s 00:36:39.119 10:46:51 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:39.119 10:46:51 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:36:39.119 ************************************ 00:36:39.119 END TEST bdev_json_nonenclosed 00:36:39.119 ************************************ 00:36:39.119 10:46:51 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:39.119 10:46:51 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:36:39.119 10:46:51 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:39.119 10:46:51 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:36:39.119 ************************************ 00:36:39.119 START TEST bdev_json_nonarray 00:36:39.119 ************************************ 00:36:39.119 10:46:51 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:39.119 [2024-07-26 10:46:51.962521] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:36:39.119 [2024-07-26 10:46:51.962576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602683 ] 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:39.378 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:39.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.378 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:39.378 [2024-07-26 10:46:52.095176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.378 [2024-07-26 10:46:52.138618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.378 [2024-07-26 10:46:52.138689] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:36:39.378 [2024-07-26 10:46:52.138705] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:39.378 [2024-07-26 10:46:52.138716] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:39.378 00:36:39.378 real 0m0.306s 00:36:39.378 user 0m0.162s 00:36:39.378 sys 0m0.143s 00:36:39.378 10:46:52 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:39.378 10:46:52 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:36:39.378 ************************************ 00:36:39.378 END TEST bdev_json_nonarray 00:36:39.378 ************************************ 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:36:39.378 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:36:39.379 10:46:52 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:36:39.379 00:36:39.379 real 1m11.579s 00:36:39.379 user 2m54.946s 
00:36:39.379 sys 0m9.865s 00:36:39.379 10:46:52 blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:39.379 10:46:52 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:36:39.379 ************************************ 00:36:39.379 END TEST blockdev_crypto_aesni 00:36:39.379 ************************************ 00:36:39.637 10:46:52 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:36:39.637 10:46:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:39.637 10:46:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:39.637 10:46:52 -- common/autotest_common.sh@10 -- # set +x 00:36:39.637 ************************************ 00:36:39.637 START TEST blockdev_crypto_sw 00:36:39.637 ************************************ 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:36:39.637 * Looking for test storage... 00:36:39.637 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:36:39.637 10:46:52 
blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3602832 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:36:39.637 10:46:52 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 3602832 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 3602832 ']' 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:39.637 10:46:52 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:39.896 [2024-07-26 10:46:52.539979] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:36:39.896 [2024-07-26 10:46:52.540043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602832 ] 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.4 cannot be used 
00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:39.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:39.896 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:39.896 [2024-07-26 10:46:52.673029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.896 [2024-07-26 10:46:52.717833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.828 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:40.828 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0 00:36:40.828 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:36:40.828 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:36:40.828 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:36:40.828 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.828 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:40.828 Malloc0 00:36:40.828 Malloc1 
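The crypto_sw configuration that follows (Malloc0 and Malloc1 base bdevs, the software keys test_dek_sw, test_dek_sw2 and test_dek_sw3, and the stacked crypto_ram, crypto_ram2 and crypto_ram3 vbdevs) is applied by blockdev.sh through rpc_cmd against the spdk_tgt started above. A rough sketch of an equivalent manual RPC sequence; the command names exist in current SPDK, but the key material, cipher choice and exact option spellings here are assumptions rather than values taken from this log:

    # Hedged sketch of the RPC calls behind the crypto_sw setup seen below.
    RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create -b Malloc0 16 512      # 16 MiB base bdev, 512 B blocks
    $RPC bdev_malloc_create -b Malloc1 16 4096     # 16 MiB base bdev, 4 KiB blocks
    $RPC accel_crypto_key_create -c AES_CBC -k <hex_key> -n test_dek_sw   # cipher and flags assumed
    $RPC bdev_crypto_create Malloc0 crypto_ram -n test_dek_sw
    $RPC bdev_crypto_create Malloc1 crypto_ram2 -n test_dek_sw2           # key created the same way
    $RPC bdev_crypto_create crypto_ram2 crypto_ram3 -n test_dek_sw3       # crypto stacked on crypto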
00:36:40.828 true 00:36:40.828 true 00:36:40.828 true 00:36:40.828 [2024-07-26 10:46:53.698482] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:36:40.828 crypto_ram 00:36:40.828 [2024-07-26 10:46:53.706506] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:36:40.828 crypto_ram2 00:36:40.828 [2024-07-26 10:46:53.714530] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:36:40.828 crypto_ram3 00:36:40.828 [ 00:36:40.828 { 00:36:40.828 "name": "Malloc1", 00:36:40.828 "aliases": [ 00:36:40.828 "40e8399a-1fd9-42f1-92ab-054e357801b3" 00:36:40.828 ], 00:36:40.828 "product_name": "Malloc disk", 00:36:40.828 "block_size": 4096, 00:36:40.828 "num_blocks": 4096, 00:36:40.828 "uuid": "40e8399a-1fd9-42f1-92ab-054e357801b3", 00:36:40.828 "assigned_rate_limits": { 00:36:40.828 "rw_ios_per_sec": 0, 00:36:40.828 "rw_mbytes_per_sec": 0, 00:36:40.828 "r_mbytes_per_sec": 0, 00:36:41.086 "w_mbytes_per_sec": 0 00:36:41.086 }, 00:36:41.086 "claimed": true, 00:36:41.086 "claim_type": "exclusive_write", 00:36:41.086 "zoned": false, 00:36:41.086 "supported_io_types": { 00:36:41.086 "read": true, 00:36:41.086 "write": true, 00:36:41.086 "unmap": true, 00:36:41.086 "flush": true, 00:36:41.086 "reset": true, 00:36:41.086 "nvme_admin": false, 00:36:41.086 "nvme_io": false, 00:36:41.086 "nvme_io_md": false, 00:36:41.086 "write_zeroes": true, 00:36:41.086 "zcopy": true, 00:36:41.086 "get_zone_info": false, 00:36:41.086 "zone_management": false, 00:36:41.086 "zone_append": false, 00:36:41.086 "compare": false, 00:36:41.086 "compare_and_write": false, 00:36:41.086 "abort": true, 00:36:41.086 "seek_hole": false, 00:36:41.086 "seek_data": false, 00:36:41.086 "copy": true, 00:36:41.086 "nvme_iov_md": false 00:36:41.086 }, 00:36:41.086 "memory_domains": [ 00:36:41.086 { 00:36:41.086 "dma_device_id": "system", 00:36:41.086 "dma_device_type": 1 00:36:41.086 }, 00:36:41.086 { 00:36:41.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:41.086 "dma_device_type": 2 00:36:41.086 } 00:36:41.086 ], 00:36:41.086 "driver_specific": {} 00:36:41.086 } 00:36:41.086 ] 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd 
save_subsystem_config -n iobuf 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f2103fd0-78da-500d-9d57-b28858028792"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f2103fd0-78da-500d-9d57-b28858028792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b118ebfb-5c5d-5d06-9df8-17d53f6ebc4a"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "b118ebfb-5c5d-5d06-9df8-17d53f6ebc4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=crypto_ram 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:36:41.086 10:46:53 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 3602832 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 3602832 ']' 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 3602832 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:41.086 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3602832 00:36:41.345 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:41.345 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:41.345 10:46:53 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3602832' 00:36:41.345 killing process with pid 3602832 00:36:41.345 10:46:54 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 3602832 00:36:41.345 10:46:54 blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 3602832 00:36:41.603 10:46:54 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:41.603 10:46:54 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:36:41.603 10:46:54 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:36:41.603 10:46:54 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:41.603 10:46:54 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:41.603 ************************************ 00:36:41.603 START TEST bdev_hello_world 00:36:41.603 ************************************ 00:36:41.603 10:46:54 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:36:41.603 [2024-07-26 10:46:54.423345] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
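hello_bdev takes the same bdev.json and opens whichever bdev is named with -b, here crypto_ram. A usage sketch of pointing the same example at a different target; crypto_ram3 as the argument is purely illustrative and not part of this run:

    # Hedged usage sketch of the hello_bdev example started above.
    HELLO=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev
    CONF=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
    "$HELLO" --json "$CONF" -b crypto_ram3   # -b selects the bdev to open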
00:36:41.603 [2024-07-26 10:46:54.423400] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603119 ] 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:41.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.603 EAL: Requested device 0000:3f:01.6 cannot be used 
00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:41.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:41.604 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:41.862 [2024-07-26 10:46:54.555849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.862 [2024-07-26 10:46:54.598778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.862 [2024-07-26 10:46:54.757967] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:36:41.862 [2024-07-26 10:46:54.758032] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:41.862 [2024-07-26 10:46:54.758046] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:42.120 [2024-07-26 10:46:54.765985] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:36:42.120 [2024-07-26 10:46:54.766003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:42.120 [2024-07-26 10:46:54.766014] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:42.120 [2024-07-26 10:46:54.774007] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:36:42.120 [2024-07-26 10:46:54.774024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:36:42.120 [2024-07-26 10:46:54.774035] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:42.120 [2024-07-26 10:46:54.813992] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:36:42.120 [2024-07-26 10:46:54.814026] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:36:42.120 [2024-07-26 10:46:54.814042] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:36:42.120 [2024-07-26 10:46:54.815302] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:36:42.120 [2024-07-26 10:46:54.815375] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:36:42.120 [2024-07-26 10:46:54.815390] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:36:42.120 [2024-07-26 10:46:54.815421] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:36:42.120 00:36:42.120 [2024-07-26 10:46:54.815438] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:36:42.120 00:36:42.120 real 0m0.626s 00:36:42.120 user 0m0.392s 00:36:42.120 sys 0m0.220s 00:36:42.120 10:46:54 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:42.120 10:46:54 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:36:42.120 ************************************ 00:36:42.120 END TEST bdev_hello_world 00:36:42.120 ************************************ 00:36:42.379 10:46:55 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:36:42.379 10:46:55 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:42.379 10:46:55 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:42.379 10:46:55 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:42.379 ************************************ 00:36:42.379 START TEST bdev_bounds 00:36:42.379 ************************************ 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3603267 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3603267' 00:36:42.379 Process bdevio pid: 3603267 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3603267 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3603267 ']' 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:42.379 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:36:42.379 [2024-07-26 10:46:55.130225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:36:42.379 [2024-07-26 10:46:55.130284] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603267 ] 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.379 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:42.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.6 cannot be used 
00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:42.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:42.380 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:42.380 [2024-07-26 10:46:55.265845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:42.638 [2024-07-26 10:46:55.311958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.638 [2024-07-26 10:46:55.312052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:42.638 [2024-07-26 10:46:55.312056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.638 [2024-07-26 10:46:55.465307] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:36:42.638 [2024-07-26 10:46:55.465367] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:42.638 [2024-07-26 10:46:55.465381] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:42.638 [2024-07-26 10:46:55.473327] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:36:42.638 [2024-07-26 10:46:55.473345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:42.638 [2024-07-26 10:46:55.473356] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:42.638 [2024-07-26 10:46:55.481349] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:36:42.638 [2024-07-26 10:46:55.481365] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:36:42.638 [2024-07-26 10:46:55.481376] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:43.204 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:43.204 10:46:55 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:36:43.204 10:46:55 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:36:43.204 I/O targets: 00:36:43.204 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:36:43.204 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:36:43.204 00:36:43.204 00:36:43.204 CUnit - A unit testing framework for C - Version 2.1-3 00:36:43.204 http://cunit.sourceforge.net/ 00:36:43.204 
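The bounds suite below is produced by the two commands already visible in the trace: bdevio launched in wait mode, then its CUnit suites triggered over RPC. A condensed sketch of that pattern (the wrapper script also records the pid and handles cleanup):

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # 1. start the bdevio app in wait mode (-w) with the generated bdev config; it loads the bdevs and idles
  $SPDK/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK/test/bdev/bdev.json '' &
  # 2. once it listens on /var/tmp/spdk.sock, run every registered suite against the exposed bdevs
  $SPDK/test/bdev/bdevio/tests.py perform_tests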
00:36:43.204 00:36:43.204 Suite: bdevio tests on: crypto_ram3 00:36:43.204 Test: blockdev write read block ...passed 00:36:43.204 Test: blockdev write zeroes read block ...passed 00:36:43.204 Test: blockdev write zeroes read no split ...passed 00:36:43.204 Test: blockdev write zeroes read split ...passed 00:36:43.204 Test: blockdev write zeroes read split partial ...passed 00:36:43.204 Test: blockdev reset ...passed 00:36:43.204 Test: blockdev write read 8 blocks ...passed 00:36:43.204 Test: blockdev write read size > 128k ...passed 00:36:43.204 Test: blockdev write read invalid size ...passed 00:36:43.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:43.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:43.204 Test: blockdev write read max offset ...passed 00:36:43.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:43.204 Test: blockdev writev readv 8 blocks ...passed 00:36:43.204 Test: blockdev writev readv 30 x 1block ...passed 00:36:43.204 Test: blockdev writev readv block ...passed 00:36:43.204 Test: blockdev writev readv size > 128k ...passed 00:36:43.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:43.204 Test: blockdev comparev and writev ...passed 00:36:43.204 Test: blockdev nvme passthru rw ...passed 00:36:43.204 Test: blockdev nvme passthru vendor specific ...passed 00:36:43.204 Test: blockdev nvme admin passthru ...passed 00:36:43.204 Test: blockdev copy ...passed 00:36:43.204 Suite: bdevio tests on: crypto_ram 00:36:43.204 Test: blockdev write read block ...passed 00:36:43.204 Test: blockdev write zeroes read block ...passed 00:36:43.204 Test: blockdev write zeroes read no split ...passed 00:36:43.204 Test: blockdev write zeroes read split ...passed 00:36:43.204 Test: blockdev write zeroes read split partial ...passed 00:36:43.204 Test: blockdev reset ...passed 00:36:43.204 Test: blockdev write read 8 blocks ...passed 00:36:43.204 Test: blockdev write read size > 128k ...passed 00:36:43.204 Test: blockdev write read invalid size ...passed 00:36:43.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:43.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:43.204 Test: blockdev write read max offset ...passed 00:36:43.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:43.204 Test: blockdev writev readv 8 blocks ...passed 00:36:43.204 Test: blockdev writev readv 30 x 1block ...passed 00:36:43.204 Test: blockdev writev readv block ...passed 00:36:43.204 Test: blockdev writev readv size > 128k ...passed 00:36:43.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:43.204 Test: blockdev comparev and writev ...passed 00:36:43.204 Test: blockdev nvme passthru rw ...passed 00:36:43.204 Test: blockdev nvme passthru vendor specific ...passed 00:36:43.204 Test: blockdev nvme admin passthru ...passed 00:36:43.204 Test: blockdev copy ...passed 00:36:43.205 00:36:43.205 Run Summary: Type Total Ran Passed Failed Inactive 00:36:43.205 suites 2 2 n/a 0 0 00:36:43.205 tests 46 46 46 0 0 00:36:43.205 asserts 260 260 260 0 n/a 00:36:43.205 00:36:43.205 Elapsed time = 0.077 seconds 00:36:43.205 0 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3603267 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3603267 ']' 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@954 -- # kill -0 3603267 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3603267 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3603267' 00:36:43.463 killing process with pid 3603267 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3603267 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3603267 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:36:43.463 00:36:43.463 real 0m1.285s 00:36:43.463 user 0m3.357s 00:36:43.463 sys 0m0.358s 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:43.463 10:46:56 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:36:43.463 ************************************ 00:36:43.463 END TEST bdev_bounds 00:36:43.463 ************************************ 00:36:43.722 10:46:56 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:36:43.722 10:46:56 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:36:43.722 10:46:56 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:43.722 10:46:56 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:43.722 ************************************ 00:36:43.722 START TEST bdev_nbd 00:36:43.722 ************************************ 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:36:43.722 10:46:56 
blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3603450 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3603450 /var/tmp/spdk-nbd.sock 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3603450 ']' 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:36:43.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.722 10:46:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:36:43.722 [2024-07-26 10:46:56.516928] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
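nbd_function_test only proceeds when the nbd kernel module is present (the /sys/module/nbd check above); loading the module is outside this trace. A sketch of that precondition together with the helper app launched here (the bdev_svc command is taken from the trace, the modprobe step is assumed):

  sudo modprobe nbd    # provides /dev/nbd0, /dev/nbd1, ... and the /sys/module/nbd node
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # bdev_svc is a bare SPDK app that just loads the bdev config and serves RPCs on a private socket
  $SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json $SPDK/test/bdev/bdev.json '' &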
00:36:43.722 [2024-07-26 10:46:56.516986] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:43.722 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.722 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:43.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:43.723 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:43.980 [2024-07-26 10:46:56.650846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.980 [2024-07-26 10:46:56.695779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.980 [2024-07-26 10:46:56.861203] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:36:43.980 [2024-07-26 10:46:56.861260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:36:43.980 [2024-07-26 10:46:56.861273] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:43.980 [2024-07-26 10:46:56.869221] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:36:43.980 [2024-07-26 10:46:56.869238] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:36:43.980 [2024-07-26 10:46:56.869249] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:43.980 [2024-07-26 10:46:56.877241] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:36:43.980 [2024-07-26 10:46:56.877258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:36:43.980 [2024-07-26 10:46:56.877268] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd 
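The start/stop verification that follows exports each crypto bdev as a kernel block device over the spdk-nbd socket, proves it answers I/O with a direct-I/O dd, and tears it down again. Condensed from the trace below into a sketch (output file path shortened; the first pass actually lets the RPC pick the nbd device):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk crypto_ram  /dev/nbd0    # export the bdev as /dev/nbd0
  $rpc nbd_start_disk crypto_ram3 /dev/nbd1
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # one O_DIRECT block read per device
  $rpc nbd_get_disks                           # JSON list of nbd_device -> bdev_name pairs
  $rpc nbd_stop_disk /dev/nbd0                 # detach; repeated for /dev/nbd1

The later data pass writes 1 MiB of /dev/urandom through each exported device and checks it with cmp -b -n 1M, and the final pass builds an lvol on a malloc bdev (bdev_malloc_create, bdev_lvol_create_lvstore, bdev_lvol_create), exports it the same way and runs mkfs.ext4 on it.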
-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:36:44.544 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:36:44.545 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:36:44.545 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:36:44.545 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:36:44.545 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:44.802 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:44.803 1+0 records in 00:36:44.803 1+0 records out 00:36:44.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244947 s, 16.7 MB/s 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:36:44.803 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:36:45.060 10:46:57 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:45.060 1+0 records in 00:36:45.060 1+0 records out 00:36:45.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342854 s, 11.9 MB/s 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:36:45.060 10:46:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:45.318 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:36:45.318 { 00:36:45.318 "nbd_device": "/dev/nbd0", 00:36:45.318 "bdev_name": "crypto_ram" 00:36:45.318 }, 00:36:45.318 { 00:36:45.318 "nbd_device": "/dev/nbd1", 00:36:45.318 "bdev_name": "crypto_ram3" 00:36:45.318 } 00:36:45.318 ]' 00:36:45.318 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:36:45.318 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:36:45.318 { 00:36:45.318 "nbd_device": "/dev/nbd0", 00:36:45.318 "bdev_name": "crypto_ram" 00:36:45.318 }, 00:36:45.318 { 00:36:45.318 "nbd_device": "/dev/nbd1", 00:36:45.318 "bdev_name": "crypto_ram3" 00:36:45.318 } 00:36:45.318 ]' 00:36:45.318 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:36:45.575 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:36:45.575 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:45.575 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:45.575 10:46:58 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:45.575 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:45.575 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:45.576 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:45.833 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 
-- # echo 0 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:46.096 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:36:46.369 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:46.369 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:46.369 10:46:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:36:46.369 /dev/nbd0 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:46.369 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:46.370 1+0 records in 00:36:46.370 1+0 records out 00:36:46.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186106 s, 
22.0 MB/s 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:46.370 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:36:46.640 /dev/nbd1 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:46.640 1+0 records in 00:36:46.640 1+0 records out 00:36:46.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323964 s, 12.6 MB/s 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:46.640 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:46.897 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:36:46.897 { 00:36:46.897 "nbd_device": "/dev/nbd0", 00:36:46.897 "bdev_name": "crypto_ram" 00:36:46.897 }, 00:36:46.897 { 00:36:46.897 "nbd_device": "/dev/nbd1", 00:36:46.897 "bdev_name": "crypto_ram3" 00:36:46.897 } 00:36:46.897 ]' 00:36:46.897 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:36:46.897 { 00:36:46.897 "nbd_device": "/dev/nbd0", 00:36:46.897 "bdev_name": "crypto_ram" 00:36:46.897 }, 00:36:46.897 { 00:36:46.897 "nbd_device": "/dev/nbd1", 00:36:46.897 "bdev_name": "crypto_ram3" 00:36:46.897 } 00:36:46.897 ]' 00:36:46.897 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:46.897 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:36:46.897 /dev/nbd1' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:36:47.155 /dev/nbd1' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:36:47.155 256+0 records in 00:36:47.155 256+0 records out 00:36:47.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110013 s, 95.3 MB/s 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:36:47.155 256+0 records in 00:36:47.155 256+0 records out 00:36:47.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278164 s, 37.7 MB/s 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:36:47.155 256+0 records in 00:36:47.155 256+0 records out 00:36:47.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332101 s, 31.6 MB/s 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:47.155 10:46:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:47.412 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:47.669 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:36:47.926 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:36:48.183 malloc_lvol_verify 00:36:48.183 10:47:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:36:48.440 cf39db65-9b35-42ec-ab9c-3d5e761633e8 00:36:48.440 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:36:48.697 c125c91b-88c5-49bd-942a-6cfedb7a615c 00:36:48.697 10:47:01 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:36:48.697 /dev/nbd0 00:36:48.697 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:36:48.955 mke2fs 1.46.5 (30-Dec-2021) 00:36:48.955 Discarding device blocks: 0/4096 done 00:36:48.955 Creating filesystem with 4096 1k blocks and 1024 inodes 00:36:48.955 00:36:48.955 Allocating group tables: 0/1 done 00:36:48.955 Writing inode tables: 0/1 done 00:36:48.955 Creating journal (1024 blocks): done 00:36:48.955 Writing superblocks and filesystem accounting information: 0/1 done 00:36:48.955 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3603450 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3603450 ']' 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3603450 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:36:48.955 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:49.213 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3603450 00:36:49.213 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:49.213 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:49.213 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3603450' 00:36:49.213 killing process with pid 3603450 00:36:49.213 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3603450 00:36:49.213 10:47:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3603450 00:36:49.213 10:47:02 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:36:49.213 00:36:49.213 real 0m5.649s 00:36:49.213 user 0m8.039s 00:36:49.213 sys 0m2.272s 00:36:49.213 10:47:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:49.213 10:47:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:36:49.213 ************************************ 00:36:49.213 END TEST bdev_nbd 00:36:49.213 ************************************ 00:36:49.495 10:47:02 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:36:49.495 10:47:02 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:36:49.495 10:47:02 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:36:49.495 10:47:02 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:36:49.495 10:47:02 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:49.495 10:47:02 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.495 10:47:02 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:36:49.495 ************************************ 00:36:49.495 START TEST bdev_fio 00:36:49.495 ************************************ 00:36:49.495 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:36:49.495 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:36:49.495 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:36:49.495 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:36:49.496 10:47:02 
blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:49.496 ************************************ 00:36:49.496 START TEST bdev_fio_rw_verify 00:36:49.496 ************************************ 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:49.496 10:47:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:36:50.143 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:50.143 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:50.143 fio-3.35 00:36:50.143 Starting 2 threads 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:50.143 [the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for devices 0000:3d:01.1 through 0000:3f:01.7] 00:36:50.143 qat_pci_device_allocate(): Reached maximum
number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:50.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:50.143 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:02.334 00:37:02.334 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=3604831: Fri Jul 26 10:47:13 2024 00:37:02.334 read: IOPS=23.4k, BW=91.3MiB/s (95.7MB/s)(913MiB/10001msec) 00:37:02.334 slat (usec): min=13, max=1575, avg=18.84, stdev= 4.73 00:37:02.334 clat (usec): min=7, max=1832, avg=136.62, stdev=54.63 00:37:02.334 lat (usec): min=25, max=1854, avg=155.47, stdev=56.09 00:37:02.334 clat percentiles (usec): 00:37:02.334 | 50.000th=[ 135], 99.000th=[ 260], 99.900th=[ 281], 99.990th=[ 322], 00:37:02.334 | 99.999th=[ 611] 00:37:02.334 write: IOPS=28.0k, BW=110MiB/s (115MB/s)(1039MiB/9485msec); 0 zone resets 00:37:02.334 slat (usec): min=13, max=243, avg=31.59, stdev= 4.15 00:37:02.334 clat (usec): min=23, max=874, avg=182.91, stdev=83.53 00:37:02.334 lat (usec): min=48, max=964, avg=214.50, stdev=85.03 00:37:02.334 clat percentiles (usec): 00:37:02.334 | 50.000th=[ 178], 99.000th=[ 363], 99.900th=[ 383], 99.990th=[ 611], 00:37:02.334 | 99.999th=[ 816] 00:37:02.334 bw ( KiB/s): min=99520, max=112920, per=95.06%, avg=106600.42, stdev=2289.24, samples=38 00:37:02.334 iops : min=24880, max=28230, avg=26650.11, stdev=572.31, samples=38 00:37:02.334 lat (usec) : 10=0.01%, 20=0.01%, 50=5.33%, 100=17.88%, 250=63.79% 00:37:02.334 lat (usec) : 500=12.98%, 750=0.02%, 1000=0.01% 00:37:02.334 lat (msec) : 2=0.01% 00:37:02.334 cpu : usr=99.64%, sys=0.00%, ctx=25, majf=0, minf=465 00:37:02.334 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.334 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.334 issued rwts: total=233684,265922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.334 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:02.334 00:37:02.334 Run status group 0 (all jobs): 00:37:02.334 READ: bw=91.3MiB/s (95.7MB/s), 91.3MiB/s-91.3MiB/s (95.7MB/s-95.7MB/s), io=913MiB (957MB), run=10001-10001msec 00:37:02.334 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1039MiB (1089MB), run=9485-9485msec 00:37:02.334 00:37:02.334 real 0m11.128s 00:37:02.334 user 0m31.904s 00:37:02.334 sys 0m0.373s 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:37:02.334 
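Note on the fio configuration driving the run above: bdev.fio is generated on the fly. fio_config_gen cats a base [global] verify template (its body is not echoed into this log), the AIO branch appends serialize_overlap=1 because fio-3.35 matches fio-3*, and the loop over bdevs_name appends one job section per crypto bdev. Reconstructed from the echoed lines, the appended portion of the file looks like this (the [global] contents themselves are not shown in the trace and are omitted here):

  # lines appended to bdev.fio by the trace above
  serialize_overlap=1
  [job_crypto_ram]
  filename=crypto_ram
  [job_crypto_ram3]
  filename=crypto_ram3

fio then consumes it through the spdk_bdev ioengine, with --spdk_json_conf pointing at the generated bdev.json, which is why the jobs address crypto_ram and crypto_ram3 directly rather than kernel block devices.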
************************************ 00:37:02.334 END TEST bdev_fio_rw_verify 00:37:02.334 ************************************ 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f2103fd0-78da-500d-9d57-b28858028792"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f2103fd0-78da-500d-9d57-b28858028792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b118ebfb-5c5d-5d06-9df8-17d53f6ebc4a"' ' ],' ' 
"product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "b118ebfb-5c5d-5d06-9df8-17d53f6ebc4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:37:02.334 crypto_ram3 ]] 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f2103fd0-78da-500d-9d57-b28858028792"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f2103fd0-78da-500d-9d57-b28858028792",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b118ebfb-5c5d-5d06-9df8-17d53f6ebc4a"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "b118ebfb-5c5d-5d06-9df8-17d53f6ebc4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": 
"crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:37:02.334 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:37:02.335 ************************************ 00:37:02.335 START TEST bdev_fio_trim 00:37:02.335 ************************************ 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:02.335 10:47:13 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:02.335 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:02.335 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:02.335 fio-3.35 00:37:02.335 Starting 2 threads 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:02.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:02.335 EAL: Requested device 
0000:3d:01.7 cannot be used 00:37:02.335 [the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for devices 0000:3d:02.0 through 0000:3f:02.7] 00:37:12.301 00:37:12.301 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=3606762: Fri Jul 26 10:47:24 2024 00:37:12.301 write: IOPS=42.1k,
BW=165MiB/s (173MB/s)(1646MiB/10001msec); 0 zone resets 00:37:12.301 slat (usec): min=13, max=1542, avg=20.63, stdev= 4.61 00:37:12.301 clat (usec): min=35, max=1691, avg=156.61, stdev=86.78 00:37:12.301 lat (usec): min=48, max=1710, avg=177.23, stdev=89.83 00:37:12.301 clat percentiles (usec): 00:37:12.301 | 50.000th=[ 124], 99.000th=[ 322], 99.900th=[ 343], 99.990th=[ 635], 00:37:12.301 | 99.999th=[ 742] 00:37:12.301 bw ( KiB/s): min=164408, max=169544, per=100.00%, avg=168619.37, stdev=536.70, samples=38 00:37:12.301 iops : min=41102, max=42386, avg=42154.84, stdev=134.18, samples=38 00:37:12.301 trim: IOPS=42.1k, BW=165MiB/s (173MB/s)(1646MiB/10001msec); 0 zone resets 00:37:12.301 slat (usec): min=5, max=319, avg= 9.34, stdev= 2.37 00:37:12.301 clat (usec): min=41, max=1710, avg=104.58, stdev=31.44 00:37:12.301 lat (usec): min=48, max=1717, avg=113.92, stdev=31.63 00:37:12.301 clat percentiles (usec): 00:37:12.301 | 50.000th=[ 105], 99.000th=[ 169], 99.900th=[ 180], 99.990th=[ 343], 00:37:12.301 | 99.999th=[ 449] 00:37:12.301 bw ( KiB/s): min=164440, max=169552, per=100.00%, avg=168621.05, stdev=534.13, samples=38 00:37:12.301 iops : min=41110, max=42388, avg=42155.26, stdev=133.53, samples=38 00:37:12.301 lat (usec) : 50=4.02%, 100=36.93%, 250=48.33%, 500=10.71%, 750=0.01% 00:37:12.301 lat (usec) : 1000=0.01% 00:37:12.301 lat (msec) : 2=0.01% 00:37:12.301 cpu : usr=99.64%, sys=0.00%, ctx=24, majf=0, minf=350 00:37:12.301 IO depths : 1=7.5%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:12.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.301 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.301 issued rwts: total=0,421330,421331,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.301 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:12.301 00:37:12.301 Run status group 0 (all jobs): 00:37:12.301 WRITE: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=1646MiB (1726MB), run=10001-10001msec 00:37:12.301 TRIM: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=1646MiB (1726MB), run=10001-10001msec 00:37:12.301 00:37:12.301 real 0m11.147s 00:37:12.301 user 0m31.309s 00:37:12.301 sys 0m0.417s 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:37:12.301 ************************************ 00:37:12.301 END TEST bdev_fio_trim 00:37:12.301 ************************************ 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:37:12.301 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:37:12.301 00:37:12.301 real 0m22.651s 00:37:12.301 user 1m3.409s 00:37:12.301 sys 0m0.992s 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:37:12.301 ************************************ 00:37:12.301 END TEST bdev_fio 00:37:12.301 ************************************ 00:37:12.301 10:47:24 blockdev_crypto_sw 
-- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:12.301 10:47:24 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:37:12.301 10:47:24 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:37:12.301 10:47:24 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:12.301 10:47:24 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:12.301 ************************************ 00:37:12.301 START TEST bdev_verify 00:37:12.301 ************************************ 00:37:12.301 10:47:24 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:37:12.301 [2024-07-26 10:47:24.985931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:37:12.302 [2024-07-26 10:47:24.985995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608592 ] 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 
0000:3d:02.6 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:12.302 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:12.302 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:12.302 [2024-07-26 10:47:25.120158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:12.302 [2024-07-26 10:47:25.164984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.302 [2024-07-26 10:47:25.164990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.560 [2024-07-26 10:47:25.326310] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:37:12.560 [2024-07-26 10:47:25.326375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:37:12.560 [2024-07-26 10:47:25.326388] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:12.560 [2024-07-26 10:47:25.334331] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:37:12.560 [2024-07-26 10:47:25.334348] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:37:12.560 [2024-07-26 10:47:25.334359] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:12.560 [2024-07-26 
10:47:25.342355] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:37:12.560 [2024-07-26 10:47:25.342371] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:37:12.560 [2024-07-26 10:47:25.342382] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:12.560 Running I/O for 5 seconds... 00:37:17.855 00:37:17.855 Latency(us) 00:37:17.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.855 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:17.855 Verification LBA range: start 0x0 length 0x800 00:37:17.855 crypto_ram : 5.02 5742.19 22.43 0.00 0.00 22197.45 2136.47 27682.41 00:37:17.855 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:17.855 Verification LBA range: start 0x800 length 0x800 00:37:17.855 crypto_ram : 5.02 5742.51 22.43 0.00 0.00 22195.82 2188.90 27682.41 00:37:17.855 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:17.855 Verification LBA range: start 0x0 length 0x800 00:37:17.855 crypto_ram3 : 5.02 2878.48 11.24 0.00 0.00 44223.03 2398.62 31876.71 00:37:17.855 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:17.855 Verification LBA range: start 0x800 length 0x800 00:37:17.855 crypto_ram3 : 5.02 2878.65 11.24 0.00 0.00 44216.57 2424.83 31876.71 00:37:17.855 =================================================================================================================== 00:37:17.855 Total : 17241.83 67.35 0.00 0.00 29559.41 2136.47 31876.71 00:37:17.855 00:37:17.855 real 0m5.694s 00:37:17.855 user 0m10.767s 00:37:17.855 sys 0m0.241s 00:37:17.855 10:47:30 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:17.855 10:47:30 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:37:17.855 ************************************ 00:37:17.855 END TEST bdev_verify 00:37:17.855 ************************************ 00:37:17.855 10:47:30 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:17.855 10:47:30 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:37:17.855 10:47:30 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:17.855 10:47:30 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:17.855 ************************************ 00:37:17.855 START TEST bdev_verify_big_io 00:37:17.855 ************************************ 00:37:17.855 10:47:30 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:18.113 [2024-07-26 10:47:30.766127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
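A note on the recurring crypto start-up messages: the 'Found key ... / Currently unable to find bdev ... / vbdev creation deferred pending base bdev arrival' triplets printed at each bdevperf start above come from the crypto_sw vbdevs declared in bdev.json being processed before their base bdevs are registered; creation is simply deferred and completes once the base bdev shows up. For orientation, an equivalent software-crypto stack could be assembled by hand roughly like this (a sketch only: key material, sizes and exact option spellings are illustrative assumptions, not taken from this run):

  # assumed example: one AES_XTS software key and a crypto vbdev over a malloc bdev
  scripts/rpc.py bdev_malloc_create -b Malloc0 16 512
  scripts/rpc.py accel_crypto_key_create --cipher AES_XTS \
      --key 00112233445566778899aabbccddeeff \
      --key2 ffeeddccbbaa99887766554433221100 \
      --name test_dek_sw
  scripts/rpc.py bdev_crypto_create --key-name test_dek_sw Malloc0 crypto_ram

The crypto_ram3 bdev shown in the earlier JSON dumps is the same pattern applied twice: a first crypto vbdev (crypto_ram2, key test_dek_sw2) sits on Malloc1, and crypto_ram3 (key test_dek_sw3) is layered on top of crypto_ram2, which is why its deferred-creation message names crypto_ram2 rather than a Malloc bdev.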
00:37:18.113 [2024-07-26 10:47:30.766189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609434 ] 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:18.113 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:18.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:18.113 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:18.113 [2024-07-26 10:47:30.897697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:18.113 [2024-07-26 10:47:30.942485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.113 [2024-07-26 10:47:30.942491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.371 [2024-07-26 10:47:31.096965] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:37:18.371 [2024-07-26 10:47:31.097021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:37:18.371 [2024-07-26 10:47:31.097034] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:18.371 [2024-07-26 10:47:31.104987] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:37:18.371 [2024-07-26 10:47:31.105004] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:37:18.371 [2024-07-26 10:47:31.105015] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:18.371 [2024-07-26 10:47:31.113010] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:37:18.371 [2024-07-26 10:47:31.113027] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:37:18.371 [2024-07-26 10:47:31.113037] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:18.371 Running I/O for 5 seconds... 
00:37:23.631 00:37:23.631 Latency(us) 00:37:23.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.631 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:23.631 Verification LBA range: start 0x0 length 0x80 00:37:23.631 crypto_ram : 5.04 432.05 27.00 0.00 0.00 289330.27 6212.81 374131.92 00:37:23.631 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:23.631 Verification LBA range: start 0x80 length 0x80 00:37:23.631 crypto_ram : 5.04 431.78 26.99 0.00 0.00 289500.77 5819.60 375809.64 00:37:23.631 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:23.631 Verification LBA range: start 0x0 length 0x80 00:37:23.631 crypto_ram3 : 5.24 244.29 15.27 0.00 0.00 493328.16 5505.02 387553.69 00:37:23.631 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:23.631 Verification LBA range: start 0x80 length 0x80 00:37:23.631 crypto_ram3 : 5.24 244.15 15.26 0.00 0.00 493447.97 5242.88 387553.69 00:37:23.631 =================================================================================================================== 00:37:23.631 Total : 1352.27 84.52 0.00 0.00 364960.91 5242.88 387553.69 00:37:23.888 00:37:23.888 real 0m5.907s 00:37:23.888 user 0m11.225s 00:37:23.888 sys 0m0.219s 00:37:23.888 10:47:36 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:23.888 10:47:36 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:37:23.888 ************************************ 00:37:23.888 END TEST bdev_verify_big_io 00:37:23.888 ************************************ 00:37:23.888 10:47:36 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:23.888 10:47:36 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:37:23.888 10:47:36 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:23.888 10:47:36 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:23.888 ************************************ 00:37:23.888 START TEST bdev_write_zeroes 00:37:23.888 ************************************ 00:37:23.888 10:47:36 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:23.888 [2024-07-26 10:47:36.761281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:37:23.888 [2024-07-26 10:47:36.761334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610500 ] 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:24.147 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:24.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:24.147 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:24.147 [2024-07-26 10:47:36.892512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.148 [2024-07-26 10:47:36.935489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.406 [2024-07-26 10:47:37.096030] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:37:24.406 [2024-07-26 10:47:37.096083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:37:24.406 [2024-07-26 10:47:37.096096] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:24.406 [2024-07-26 10:47:37.104047] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:37:24.406 [2024-07-26 10:47:37.104065] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:37:24.406 [2024-07-26 10:47:37.104075] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:24.406 [2024-07-26 10:47:37.112069] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:37:24.406 [2024-07-26 10:47:37.112086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:37:24.406 [2024-07-26 10:47:37.112096] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:24.406 Running I/O for 1 seconds... 
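While that one-second write_zeroes pass runs, note what the three pairs of NOTICE lines above record: each bdev_crypto_create replayed from the JSON config is accepted even though its base bdev (Malloc0, Malloc1, crypto_ram2) does not exist yet, and the crypto vbdev is only materialized once the base arrives. A config fragment that leans on this deferred creation could look roughly as follows; this is a sketch only, the size is illustrative, the param names follow the key_name flow shown later in this log, and the accel key test_dek_sw itself is assumed to be registered separately (accel_crypto_key_create):

  cat > bdev-example.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_crypto_create",
            "params": { "base_bdev_name": "Malloc0", "name": "crypto_ram", "key_name": "test_dek_sw" } },
          { "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } }
        ] }
    ]
  }
  EOF

Because creation is deferred, listing the crypto bdev before its malloc base, as above, still configures cleanly.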
00:37:25.339 00:37:25.339 Latency(us) 00:37:25.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.339 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:25.339 crypto_ram : 1.01 28696.08 112.09 0.00 0.00 4449.72 1199.31 6212.81 00:37:25.339 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:25.339 crypto_ram3 : 1.01 14321.33 55.94 0.00 0.00 8875.48 5505.02 9279.90 00:37:25.339 =================================================================================================================== 00:37:25.339 Total : 43017.41 168.04 0.00 0.00 5924.97 1199.31 9279.90 00:37:25.596 00:37:25.596 real 0m1.644s 00:37:25.596 user 0m1.405s 00:37:25.596 sys 0m0.222s 00:37:25.596 10:47:38 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:25.596 10:47:38 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:37:25.596 ************************************ 00:37:25.596 END TEST bdev_write_zeroes 00:37:25.596 ************************************ 00:37:25.596 10:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:25.596 10:47:38 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:37:25.596 10:47:38 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:25.596 10:47:38 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:25.596 ************************************ 00:37:25.596 START TEST bdev_json_nonenclosed 00:37:25.596 ************************************ 00:37:25.596 10:47:38 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:25.854 [2024-07-26 10:47:38.499687] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
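The bdev_json_nonenclosed run starting here hands bdevperf a config whose content is not wrapped in the single top-level JSON object the loader requires, and the expected outcome is a clean rejection rather than a crash. A sketch of that malformed shape (illustrative only; the literal contents of test/bdev/nonenclosed.json are not reproduced in this log):

  # not enclosed in { ... }: the loader should refuse it
  cat > nonenclosed-example.json <<'EOF'
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
  EOF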
00:37:25.854 [2024-07-26 10:47:38.499746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610776 ] 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:25.854 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:25.854 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:25.854 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:25.854 [2024-07-26 10:47:38.631968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.854 [2024-07-26 10:47:38.675762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.854 [2024-07-26 10:47:38.675828] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:37:25.854 [2024-07-26 10:47:38.675843] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:25.854 [2024-07-26 10:47:38.675854] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:25.854 00:37:25.854 real 0m0.310s 00:37:25.854 user 0m0.155s 00:37:25.854 sys 0m0.153s 00:37:25.854 10:47:38 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:25.854 10:47:38 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:37:25.854 ************************************ 00:37:25.854 END TEST bdev_json_nonenclosed 00:37:25.854 ************************************ 00:37:26.112 10:47:38 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:26.112 10:47:38 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:37:26.112 10:47:38 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:26.112 10:47:38 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:26.112 ************************************ 00:37:26.112 START TEST bdev_json_nonarray 00:37:26.112 ************************************ 00:37:26.112 10:47:38 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:26.112 [2024-07-26 10:47:38.896905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
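The non-enclosed file was rejected above with "not enclosed in {}", which is exactly what the test asserts. The bdev_json_nonarray run starting here exercises the sibling rule: the top-level "subsystems" key must hold an array of subsystem objects. A sketch of a shape that trips that check (again illustrative, not the literal nonarray.json):

  # "subsystems" is an object here, not an array: the loader should refuse it
  cat > nonarray-example.json <<'EOF'
  { "subsystems": { "subsystem": "bdev", "config": [] } }
  EOF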
00:37:26.112 [2024-07-26 10:47:38.896963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610806 ] 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:26.112 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:26.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.112 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:26.371 [2024-07-26 10:47:39.034630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.371 [2024-07-26 10:47:39.076403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.371 [2024-07-26 10:47:39.076478] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:37:26.371 [2024-07-26 10:47:39.076494] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:26.371 [2024-07-26 10:47:39.076506] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:26.371 00:37:26.371 real 0m0.315s 00:37:26.371 user 0m0.149s 00:37:26.371 sys 0m0.163s 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:37:26.371 ************************************ 00:37:26.371 END TEST bdev_json_nonarray 00:37:26.371 ************************************ 00:37:26.371 10:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]] 00:37:26.371 10:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]] 00:37:26.371 10:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]] 00:37:26.371 10:47:39 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:37:26.371 10:47:39 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:26.371 10:47:39 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:26.371 10:47:39 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:26.371 ************************************ 00:37:26.371 START TEST bdev_crypto_enomem 00:37:26.371 ************************************ 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 
-- # local err_dev=EE_base0 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=3610835 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 3610835 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 3610835 ']' 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:26.371 10:47:39 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:26.630 [2024-07-26 10:47:39.301011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:37:26.630 [2024-07-26 10:47:39.301072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610835 ] 00:37:26.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.630 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:26.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.630 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.2 cannot be used 
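While the bdevperf instance above works through EAL init (the QAT device notices continue below), the ENOMEM scenario it serves builds a three-layer stack: a malloc bdev base0, an error-injection bdev EE_base0 on top of it, and the crypto vbdev crypt0 bound to the software key test_dek_sw. bdevperf then drives randwrite at queue depth 32 while nomem completions are injected on writes. Done by hand against a running bdevperf started with -z, the sequence would look roughly like this; the rpc.py verbs are the stock ones, flag spellings not visible in this log are assumptions, and the accel key test_dek_sw is presumed to exist already:

  ./scripts/rpc.py bdev_malloc_create -b base0 1024 512
  ./scripts/rpc.py bdev_error_create base0                           # exposes EE_base0
  ./scripts/rpc.py bdev_crypto_create EE_base0 crypt0 -n test_dek_sw
  ./examples/bdev/bdevperf/bdevperf.py perform_tests &
  sleep 1
  ./scripts/rpc.py bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem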
00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:26.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:26.631 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:26.631 [2024-07-26 10:47:39.425543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.631 [2024-07-26 10:47:39.469058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:27.568 true 00:37:27.568 base0 00:37:27.568 true 00:37:27.568 [2024-07-26 10:47:40.174921] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:37:27.568 crypt0 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:27.568 [ 00:37:27.568 { 00:37:27.568 "name": "crypt0", 00:37:27.568 "aliases": [ 00:37:27.568 "c062d592-4e61-5c2f-86ef-20aa8e1714e4" 00:37:27.568 ], 00:37:27.568 "product_name": "crypto", 00:37:27.568 "block_size": 512, 00:37:27.568 "num_blocks": 2097152, 00:37:27.568 "uuid": "c062d592-4e61-5c2f-86ef-20aa8e1714e4", 00:37:27.568 "assigned_rate_limits": { 00:37:27.568 "rw_ios_per_sec": 0, 00:37:27.568 "rw_mbytes_per_sec": 0, 00:37:27.568 "r_mbytes_per_sec": 0, 00:37:27.568 "w_mbytes_per_sec": 0 00:37:27.568 }, 00:37:27.568 "claimed": false, 00:37:27.568 "zoned": false, 00:37:27.568 "supported_io_types": { 00:37:27.568 "read": true, 00:37:27.568 "write": true, 00:37:27.568 "unmap": false, 00:37:27.568 "flush": false, 00:37:27.568 "reset": true, 00:37:27.568 "nvme_admin": false, 00:37:27.568 "nvme_io": false, 00:37:27.568 "nvme_io_md": false, 00:37:27.568 "write_zeroes": true, 00:37:27.568 "zcopy": false, 00:37:27.568 "get_zone_info": false, 00:37:27.568 "zone_management": false, 00:37:27.568 "zone_append": false, 00:37:27.568 "compare": false, 00:37:27.568 "compare_and_write": false, 00:37:27.568 "abort": false, 00:37:27.568 "seek_hole": false, 00:37:27.568 "seek_data": false, 00:37:27.568 "copy": false, 00:37:27.568 "nvme_iov_md": false 00:37:27.568 }, 00:37:27.568 "memory_domains": [ 00:37:27.568 { 00:37:27.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:27.568 "dma_device_type": 2 00:37:27.568 } 00:37:27.568 ], 00:37:27.568 "driver_specific": { 00:37:27.568 "crypto": { 00:37:27.568 "base_bdev_name": "EE_base0", 00:37:27.568 "name": "crypt0", 00:37:27.568 
"key_name": "test_dek_sw" 00:37:27.568 } 00:37:27.568 } 00:37:27.568 } 00:37:27.568 ] 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=3611090 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:37:27.568 10:47:40 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:27.568 Running I/O for 5 seconds... 00:37:28.505 10:47:41 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:37:28.505 10:47:41 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.505 10:47:41 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:28.505 10:47:41 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.505 10:47:41 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 3611090 00:37:32.695 00:37:32.696 Latency(us) 00:37:32.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.696 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:37:32.696 crypt0 : 5.00 39082.36 152.67 0.00 0.00 815.32 386.66 1081.34 00:37:32.696 =================================================================================================================== 00:37:32.696 Total : 39082.36 152.67 0.00 0.00 815.32 386.66 1081.34 00:37:32.696 0 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 3610835 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 3610835 ']' 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 3610835 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3610835 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3610835' 00:37:32.696 killing process with pid 3610835 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 3610835 00:37:32.696 Received shutdown signal, test time was 
about 5.000000 seconds 00:37:32.696 00:37:32.696 Latency(us) 00:37:32.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.696 =================================================================================================================== 00:37:32.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 3610835 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:37:32.696 00:37:32.696 real 0m6.340s 00:37:32.696 user 0m6.534s 00:37:32.696 sys 0m0.362s 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:32.696 10:47:45 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:37:32.696 ************************************ 00:37:32.696 END TEST bdev_crypto_enomem 00:37:32.696 ************************************ 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # cleanup 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:37:32.956 10:47:45 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:37:32.956 00:37:32.956 real 0m53.285s 00:37:32.956 user 1m47.832s 00:37:32.956 sys 0m6.418s 00:37:32.956 10:47:45 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:32.956 10:47:45 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:37:32.956 ************************************ 00:37:32.956 END TEST blockdev_crypto_sw 00:37:32.956 ************************************ 00:37:32.956 10:47:45 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:37:32.956 10:47:45 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:32.956 10:47:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:32.956 10:47:45 -- common/autotest_common.sh@10 -- # set +x 00:37:32.956 ************************************ 00:37:32.956 START TEST blockdev_crypto_qat 00:37:32.956 ************************************ 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:37:32.956 * Looking for test storage... 
00:37:32.956 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # env_ctx= 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3612008 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:37:32.956 10:47:45 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 3612008 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 3612008 ']' 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:32.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:32.956 10:47:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:33.216 [2024-07-26 10:47:45.913153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:37:33.216 [2024-07-26 10:47:45.913219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612008 ] 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:33.216 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:33.216 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:33.216 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:33.216 [2024-07-26 10:47:46.047106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.216 [2024-07-26 10:47:46.091775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.152 10:47:46 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:34.152 10:47:46 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:37:34.152 10:47:46 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:37:34.152 10:47:46 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:37:34.152 10:47:46 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:37:34.152 10:47:46 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.152 10:47:46 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:34.152 [2024-07-26 10:47:46.813993] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:37:34.152 [2024-07-26 10:47:46.822026] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:37:34.152 [2024-07-26 10:47:46.830044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:37:34.152 [2024-07-26 10:47:46.904583] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:37:36.768 true 00:37:36.768 true 00:37:36.768 true 00:37:36.768 true 00:37:36.768 Malloc0 00:37:36.768 Malloc1 00:37:36.768 Malloc2 00:37:36.768 Malloc3 00:37:36.768 [2024-07-26 10:47:49.375314] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:37:36.768 crypto_ram 00:37:36.768 [2024-07-26 10:47:49.383333] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:37:36.768 crypto_ram1 
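The trace here is setup_crypto_qat_conf wiring the accel layer to QAT and stacking crypto bdevs over malloc bases: encrypt and decrypt are assigned to the dpdk_cryptodev module with driver crypto_qat, 96 crypto devices are found at framework init, and crypto_ram..crypto_ram3 are created over Malloc0..Malloc3 with the test_dek_qat_* keys (the last two key registrations follow just below). Since spdk_tgt was started with --wait-for-rpc, the equivalent manual sequence would be roughly the following; the rpc.py option spellings are assumptions rather than copies from this log, and the test_dek_qat_* accel keys are assumed to be registered already:

  ./scripts/rpc.py dpdk_cryptodev_set_driver -d crypto_qat
  ./scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
  ./scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py bdev_crypto_create Malloc0 crypto_ram -n test_dek_qat_cbc
  # ...and likewise for Malloc1..Malloc3 (Malloc2/3 use a 4096-byte block size)
  #    with test_dek_qat_xts, test_dek_qat_cbc2, test_dek_qat_xts2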
00:37:36.768 [2024-07-26 10:47:49.391355] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:37:36.768 crypto_ram2 00:37:36.768 [2024-07-26 10:47:49.399377] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:37:36.768 crypto_ram3 00:37:36.768 [ 00:37:36.768 { 00:37:36.768 "name": "Malloc1", 00:37:36.768 "aliases": [ 00:37:36.768 "d63f6cb7-ada1-4950-99a0-94d8a84a640f" 00:37:36.768 ], 00:37:36.768 "product_name": "Malloc disk", 00:37:36.768 "block_size": 512, 00:37:36.768 "num_blocks": 65536, 00:37:36.768 "uuid": "d63f6cb7-ada1-4950-99a0-94d8a84a640f", 00:37:36.768 "assigned_rate_limits": { 00:37:36.768 "rw_ios_per_sec": 0, 00:37:36.768 "rw_mbytes_per_sec": 0, 00:37:36.768 "r_mbytes_per_sec": 0, 00:37:36.768 "w_mbytes_per_sec": 0 00:37:36.768 }, 00:37:36.768 "claimed": true, 00:37:36.768 "claim_type": "exclusive_write", 00:37:36.768 "zoned": false, 00:37:36.768 "supported_io_types": { 00:37:36.768 "read": true, 00:37:36.768 "write": true, 00:37:36.768 "unmap": true, 00:37:36.768 "flush": true, 00:37:36.768 "reset": true, 00:37:36.768 "nvme_admin": false, 00:37:36.768 "nvme_io": false, 00:37:36.768 "nvme_io_md": false, 00:37:36.768 "write_zeroes": true, 00:37:36.768 "zcopy": true, 00:37:36.768 "get_zone_info": false, 00:37:36.768 "zone_management": false, 00:37:36.768 "zone_append": false, 00:37:36.768 "compare": false, 00:37:36.768 "compare_and_write": false, 00:37:36.768 "abort": true, 00:37:36.768 "seek_hole": false, 00:37:36.768 "seek_data": false, 00:37:36.768 "copy": true, 00:37:36.768 "nvme_iov_md": false 00:37:36.768 }, 00:37:36.768 "memory_domains": [ 00:37:36.768 { 00:37:36.768 "dma_device_id": "system", 00:37:36.768 "dma_device_type": 1 00:37:36.768 }, 00:37:36.768 { 00:37:36.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:36.768 "dma_device_type": 2 00:37:36.768 } 00:37:36.768 ], 00:37:36.768 "driver_specific": {} 00:37:36.768 } 00:37:36.768 ] 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.768 10:47:49 blockdev_crypto_qat -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:36.768 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:37:36.768 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:37:36.769 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f9df31e8-4a2f-5647-a154-a2d9070d19ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9df31e8-4a2f-5647-a154-a2d9070d19ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ae4b1be0-24cd-5786-b3a0-47b26c89bca6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ae4b1be0-24cd-5786-b3a0-47b26c89bca6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "6385c116-e1c6-5231-b42e-fe8a4a9934ff"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "6385c116-e1c6-5231-b42e-fe8a4a9934ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e1813aa1-e244-54b5-ae0a-bb9429851e79"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e1813aa1-e244-54b5-ae0a-bb9429851e79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:37:36.769 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:37:36.769 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:37:36.769 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:37:36.769 10:47:49 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 3612008 00:37:36.769 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 3612008 ']' 00:37:36.769 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 3612008 00:37:36.769 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:37:36.769 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:36.769 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3612008 00:37:37.028 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:37.028 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:37.028 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3612008' 00:37:37.028 killing process with pid 3612008 00:37:37.028 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 3612008 00:37:37.028 10:47:49 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 3612008 00:37:37.287 10:47:50 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:37.287 10:47:50 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test 
bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:37:37.287 10:47:50 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:37:37.287 10:47:50 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:37.287 10:47:50 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:37.287 ************************************ 00:37:37.287 START TEST bdev_hello_world 00:37:37.287 ************************************ 00:37:37.287 10:47:50 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:37:37.545 [2024-07-26 10:47:50.232706] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:37:37.545 [2024-07-26 10:47:50.232761] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612752 ] 00:37:37.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.545 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:37.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.545 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:37.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.545 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3d:02.7 cannot be used 
00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:37.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.546 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:37.546 [2024-07-26 10:47:50.367667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.546 [2024-07-26 10:47:50.410925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.546 [2024-07-26 10:47:50.432175] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:37:37.546 [2024-07-26 10:47:50.440196] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:37:37.805 [2024-07-26 10:47:50.448213] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:37:37.805 [2024-07-26 10:47:50.553608] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:37:40.338 [2024-07-26 10:47:52.879382] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:37:40.338 [2024-07-26 10:47:52.879450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:37:40.338 [2024-07-26 10:47:52.879465] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:40.338 [2024-07-26 10:47:52.887401] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_qat_xts" 00:37:40.338 [2024-07-26 10:47:52.887424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:37:40.338 [2024-07-26 10:47:52.887435] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:40.338 [2024-07-26 10:47:52.895421] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:37:40.338 [2024-07-26 10:47:52.895441] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:37:40.338 [2024-07-26 10:47:52.895452] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:40.338 [2024-07-26 10:47:52.903441] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:37:40.338 [2024-07-26 10:47:52.903457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:37:40.338 [2024-07-26 10:47:52.903467] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:40.338 [2024-07-26 10:47:52.975281] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:37:40.338 [2024-07-26 10:47:52.975325] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:37:40.338 [2024-07-26 10:47:52.975342] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:37:40.338 [2024-07-26 10:47:52.976653] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:37:40.338 [2024-07-26 10:47:52.976720] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:37:40.338 [2024-07-26 10:47:52.976735] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:37:40.338 [2024-07-26 10:47:52.976775] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:37:40.338 00:37:40.338 [2024-07-26 10:47:52.976792] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:37:40.597 00:37:40.597 real 0m3.085s 00:37:40.597 user 0m2.556s 00:37:40.597 sys 0m0.491s 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:37:40.597 ************************************ 00:37:40.597 END TEST bdev_hello_world 00:37:40.597 ************************************ 00:37:40.597 10:47:53 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:37:40.597 10:47:53 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:40.597 10:47:53 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:40.597 10:47:53 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:40.597 ************************************ 00:37:40.597 START TEST bdev_bounds 00:37:40.597 ************************************ 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3613293 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3613293' 00:37:40.597 Process bdevio pid: 3613293 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3613293 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3613293 ']' 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:40.597 10:47:53 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:40.597 [2024-07-26 10:47:53.405837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:37:40.597 [2024-07-26 10:47:53.405901] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613293 ] 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.6 cannot be used 
00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.597 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:40.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.598 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:40.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.598 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:40.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.598 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:40.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.598 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:40.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:40.598 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:40.856 [2024-07-26 10:47:53.542826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:40.856 [2024-07-26 10:47:53.588451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.856 [2024-07-26 10:47:53.588547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:40.856 [2024-07-26 10:47:53.588551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.856 [2024-07-26 10:47:53.609844] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:37:40.856 [2024-07-26 10:47:53.617873] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:37:40.856 [2024-07-26 10:47:53.625891] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:37:40.856 [2024-07-26 10:47:53.724697] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:37:43.387 [2024-07-26 10:47:56.036619] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:37:43.387 [2024-07-26 10:47:56.036691] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:37:43.387 [2024-07-26 10:47:56.036705] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:43.387 [2024-07-26 10:47:56.044640] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:37:43.387 [2024-07-26 10:47:56.044657] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:37:43.387 [2024-07-26 10:47:56.044668] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:43.387 [2024-07-26 10:47:56.052660] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:37:43.387 [2024-07-26 10:47:56.052678] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:37:43.387 [2024-07-26 10:47:56.052688] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:43.387 [2024-07-26 10:47:56.060684] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_qat_xts2" 00:37:43.387 [2024-07-26 10:47:56.060700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:37:43.387 [2024-07-26 10:47:56.060710] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:43.387 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:43.387 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:37:43.387 10:47:56 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:37:43.387 I/O targets: 00:37:43.387 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:37:43.387 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:37:43.387 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:37:43.387 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:37:43.387 00:37:43.387 00:37:43.387 CUnit - A unit testing framework for C - Version 2.1-3 00:37:43.387 http://cunit.sourceforge.net/ 00:37:43.387 00:37:43.387 00:37:43.387 Suite: bdevio tests on: crypto_ram3 00:37:43.387 Test: blockdev write read block ...passed 00:37:43.387 Test: blockdev write zeroes read block ...passed 00:37:43.387 Test: blockdev write zeroes read no split ...passed 00:37:43.387 Test: blockdev write zeroes read split ...passed 00:37:43.387 Test: blockdev write zeroes read split partial ...passed 00:37:43.387 Test: blockdev reset ...passed 00:37:43.387 Test: blockdev write read 8 blocks ...passed 00:37:43.387 Test: blockdev write read size > 128k ...passed 00:37:43.387 Test: blockdev write read invalid size ...passed 00:37:43.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:43.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:43.387 Test: blockdev write read max offset ...passed 00:37:43.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:43.387 Test: blockdev writev readv 8 blocks ...passed 00:37:43.387 Test: blockdev writev readv 30 x 1block ...passed 00:37:43.387 Test: blockdev writev readv block ...passed 00:37:43.387 Test: blockdev writev readv size > 128k ...passed 00:37:43.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:43.387 Test: blockdev comparev and writev ...passed 00:37:43.387 Test: blockdev nvme passthru rw ...passed 00:37:43.387 Test: blockdev nvme passthru vendor specific ...passed 00:37:43.387 Test: blockdev nvme admin passthru ...passed 00:37:43.387 Test: blockdev copy ...passed 00:37:43.387 Suite: bdevio tests on: crypto_ram2 00:37:43.387 Test: blockdev write read block ...passed 00:37:43.387 Test: blockdev write zeroes read block ...passed 00:37:43.387 Test: blockdev write zeroes read no split ...passed 00:37:43.646 Test: blockdev write zeroes read split ...passed 00:37:43.646 Test: blockdev write zeroes read split partial ...passed 00:37:43.646 Test: blockdev reset ...passed 00:37:43.646 Test: blockdev write read 8 blocks ...passed 00:37:43.646 Test: blockdev write read size > 128k ...passed 00:37:43.646 Test: blockdev write read invalid size ...passed 00:37:43.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:43.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:43.647 Test: blockdev write read max offset ...passed 00:37:43.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:43.647 Test: blockdev 
writev readv 8 blocks ...passed 00:37:43.647 Test: blockdev writev readv 30 x 1block ...passed 00:37:43.647 Test: blockdev writev readv block ...passed 00:37:43.647 Test: blockdev writev readv size > 128k ...passed 00:37:43.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:43.647 Test: blockdev comparev and writev ...passed 00:37:43.647 Test: blockdev nvme passthru rw ...passed 00:37:43.647 Test: blockdev nvme passthru vendor specific ...passed 00:37:43.647 Test: blockdev nvme admin passthru ...passed 00:37:43.647 Test: blockdev copy ...passed 00:37:43.647 Suite: bdevio tests on: crypto_ram1 00:37:43.647 Test: blockdev write read block ...passed 00:37:43.647 Test: blockdev write zeroes read block ...passed 00:37:43.647 Test: blockdev write zeroes read no split ...passed 00:37:43.647 Test: blockdev write zeroes read split ...passed 00:37:43.647 Test: blockdev write zeroes read split partial ...passed 00:37:43.647 Test: blockdev reset ...passed 00:37:43.647 Test: blockdev write read 8 blocks ...passed 00:37:43.647 Test: blockdev write read size > 128k ...passed 00:37:43.647 Test: blockdev write read invalid size ...passed 00:37:43.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:43.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:43.647 Test: blockdev write read max offset ...passed 00:37:43.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:43.647 Test: blockdev writev readv 8 blocks ...passed 00:37:43.647 Test: blockdev writev readv 30 x 1block ...passed 00:37:43.647 Test: blockdev writev readv block ...passed 00:37:43.647 Test: blockdev writev readv size > 128k ...passed 00:37:43.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:43.647 Test: blockdev comparev and writev ...passed 00:37:43.647 Test: blockdev nvme passthru rw ...passed 00:37:43.647 Test: blockdev nvme passthru vendor specific ...passed 00:37:43.647 Test: blockdev nvme admin passthru ...passed 00:37:43.647 Test: blockdev copy ...passed 00:37:43.647 Suite: bdevio tests on: crypto_ram 00:37:43.647 Test: blockdev write read block ...passed 00:37:43.647 Test: blockdev write zeroes read block ...passed 00:37:43.647 Test: blockdev write zeroes read no split ...passed 00:37:43.647 Test: blockdev write zeroes read split ...passed 00:37:43.647 Test: blockdev write zeroes read split partial ...passed 00:37:43.647 Test: blockdev reset ...passed 00:37:43.647 Test: blockdev write read 8 blocks ...passed 00:37:43.647 Test: blockdev write read size > 128k ...passed 00:37:43.647 Test: blockdev write read invalid size ...passed 00:37:43.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:43.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:43.647 Test: blockdev write read max offset ...passed 00:37:43.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:43.647 Test: blockdev writev readv 8 blocks ...passed 00:37:43.647 Test: blockdev writev readv 30 x 1block ...passed 00:37:43.647 Test: blockdev writev readv block ...passed 00:37:43.647 Test: blockdev writev readv size > 128k ...passed 00:37:43.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:43.647 Test: blockdev comparev and writev ...passed 00:37:43.647 Test: blockdev nvme passthru rw ...passed 00:37:43.647 Test: blockdev nvme passthru vendor specific ...passed 00:37:43.647 Test: blockdev nvme admin passthru ...passed 00:37:43.647 Test: 
blockdev copy ...passed 00:37:43.647 00:37:43.647 Run Summary: Type Total Ran Passed Failed Inactive 00:37:43.647 suites 4 4 n/a 0 0 00:37:43.647 tests 92 92 92 0 0 00:37:43.647 asserts 520 520 520 0 n/a 00:37:43.647 00:37:43.647 Elapsed time = 0.467 seconds 00:37:43.647 0 00:37:43.647 10:47:56 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3613293 00:37:43.647 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3613293 ']' 00:37:43.647 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3613293 00:37:43.647 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:37:43.647 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:43.647 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3613293 00:37:43.906 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:43.906 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:43.906 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3613293' 00:37:43.906 killing process with pid 3613293 00:37:43.906 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3613293 00:37:43.906 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3613293 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:37:44.166 00:37:44.166 real 0m3.504s 00:37:44.166 user 0m9.737s 00:37:44.166 sys 0m0.662s 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:37:44.166 ************************************ 00:37:44.166 END TEST bdev_bounds 00:37:44.166 ************************************ 00:37:44.166 10:47:56 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:37:44.166 10:47:56 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:37:44.166 10:47:56 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:44.166 10:47:56 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:44.166 ************************************ 00:37:44.166 START TEST bdev_nbd 00:37:44.166 ************************************ 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 
'crypto_ram2' 'crypto_ram3') 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3613855 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3613855 /var/tmp/spdk-nbd.sock 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3613855 ']' 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:37:44.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:44.166 10:47:56 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:37:44.166 [2024-07-26 10:47:56.990626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:37:44.166 [2024-07-26 10:47:56.990681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.166 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:44.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:44.167 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:44.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.167 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:44.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:44.425 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:44.426 [2024-07-26 10:47:57.123005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.426 [2024-07-26 10:47:57.167295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.426 [2024-07-26 10:47:57.188588] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:37:44.426 [2024-07-26 10:47:57.196616] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:37:44.426 [2024-07-26 10:47:57.204635] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:37:44.426 [2024-07-26 10:47:57.310903] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:37:46.959 [2024-07-26 10:47:59.636197] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:37:46.959 [2024-07-26 10:47:59.636262] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:37:46.959 [2024-07-26 10:47:59.636277] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:46.959 [2024-07-26 10:47:59.644215] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:37:46.959 [2024-07-26 10:47:59.644237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:37:46.959 [2024-07-26 10:47:59.644248] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:46.959 [2024-07-26 10:47:59.652234] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:37:46.959 [2024-07-26 10:47:59.652250] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:37:46.959 [2024-07-26 10:47:59.652260] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:46.959 [2024-07-26 10:47:59.660255] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:37:46.959 [2024-07-26 10:47:59.660271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:37:46.959 [2024-07-26 10:47:59.660281] vbdev_crypto.c: 
617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:37:46.959 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:37:46.960 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:37:46.960 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:37:46.960 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:37:46.960 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:47.218 10:47:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:47.218 1+0 records in 00:37:47.218 1+0 records out 00:37:47.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289467 s, 14.2 MB/s 00:37:47.218 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:37:47.219 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:47.477 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:47.477 1+0 records in 00:37:47.477 1+0 records out 00:37:47.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312479 s, 13.1 MB/s 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:37:47.478 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:47.736 1+0 records in 00:37:47.736 1+0 records out 00:37:47.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314806 s, 13.0 MB/s 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:37:47.736 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:47.995 1+0 records in 00:37:47.995 1+0 records out 00:37:47.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374347 s, 10.9 MB/s 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:37:47.995 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:48.253 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd0", 00:37:48.253 "bdev_name": "crypto_ram" 00:37:48.253 }, 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd1", 00:37:48.253 "bdev_name": "crypto_ram1" 00:37:48.253 }, 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd2", 00:37:48.253 "bdev_name": "crypto_ram2" 00:37:48.253 }, 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd3", 00:37:48.253 "bdev_name": "crypto_ram3" 00:37:48.253 } 00:37:48.253 ]' 00:37:48.253 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:37:48.253 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd0", 00:37:48.253 "bdev_name": "crypto_ram" 00:37:48.253 }, 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd1", 00:37:48.253 "bdev_name": "crypto_ram1" 00:37:48.253 }, 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd2", 00:37:48.253 "bdev_name": "crypto_ram2" 00:37:48.253 }, 00:37:48.253 { 00:37:48.253 "nbd_device": "/dev/nbd3", 00:37:48.253 "bdev_name": "crypto_ram3" 00:37:48.253 } 00:37:48.253 ]' 00:37:48.253 10:48:00 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:48.253 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:48.511 10:48:01 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:48.511 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:48.769 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:49.028 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:37:49.287 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:37:49.287 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:37:49.287 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:37:49.287 10:48:01 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:49.287 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:49.287 10:48:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:37:49.287 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:49.287 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:49.287 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:49.287 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:49.287 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:37:49.545 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 
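Each nbd_stop_disk call above is followed by a wait loop that polls /proc/partitions until the kernel entry for the device disappears, giving up after 20 iterations. A rough stand-in for that waitfornbd_exit helper, assuming a short sleep between polls since the trace does not show the interval:

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # done once the device is gone from the partition table
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    echo "$nbd_name still present after $((i - 1)) tries" >&2
    return 1
}

waitfornbd_exit nbd3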
00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:37:49.546 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:37:49.805 /dev/nbd0 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:49.805 1+0 records in 00:37:49.805 1+0 records out 00:37:49.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287303 s, 14.3 MB/s 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:37:49.805 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:37:50.064 /dev/nbd1 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:50.064 10:48:02 
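For the second verification pass, nbd_start_disks pairs each bdev in bdev_list with the NBD node at the same index and issues one nbd_start_disk RPC per pair, probing the node before moving on (the probe itself is sketched a little further below). A reduced version of that pairing loop, using the lists from the trace:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

bdev_list=(crypto_ram crypto_ram1 crypto_ram2 crypto_ram3)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)

# the i-th bdev is exported on the i-th NBD node
for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    "$rpc" -s "$sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
done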
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:50.064 1+0 records in 00:37:50.064 1+0 records out 00:37:50.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316817 s, 12.9 MB/s 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:37:50.064 10:48:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:37:50.323 /dev/nbd10 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:50.323 1+0 records in 00:37:50.323 1+0 records out 00:37:50.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247657 s, 16.5 MB/s 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:50.323 10:48:03 
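Once a device is exported, the start path proves it is actually readable: it copies a single 4 KiB block with O_DIRECT into a scratch file, checks that the copy is non-empty, and removes the file again. A simplified sketch of that probe, keeping the scratch-file path from the log and reducing the error handling to a return code:

probe_nbd() {
    local dev=$1
    local tmp=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest

    # read one block straight from the device, bypassing the page cache
    dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct || return 1

    # the copy must be non-zero in size for the device to count as usable
    local size
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}

probe_nbd /dev/nbd10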
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:37:50.323 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:37:50.583 /dev/nbd11 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:50.583 1+0 records in 00:37:50.583 1+0 records out 00:37:50.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027029 s, 15.2 MB/s 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:50.583 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd0", 00:37:50.842 "bdev_name": 
"crypto_ram" 00:37:50.842 }, 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd1", 00:37:50.842 "bdev_name": "crypto_ram1" 00:37:50.842 }, 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd10", 00:37:50.842 "bdev_name": "crypto_ram2" 00:37:50.842 }, 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd11", 00:37:50.842 "bdev_name": "crypto_ram3" 00:37:50.842 } 00:37:50.842 ]' 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd0", 00:37:50.842 "bdev_name": "crypto_ram" 00:37:50.842 }, 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd1", 00:37:50.842 "bdev_name": "crypto_ram1" 00:37:50.842 }, 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd10", 00:37:50.842 "bdev_name": "crypto_ram2" 00:37:50.842 }, 00:37:50.842 { 00:37:50.842 "nbd_device": "/dev/nbd11", 00:37:50.842 "bdev_name": "crypto_ram3" 00:37:50.842 } 00:37:50.842 ]' 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:37:50.842 /dev/nbd1 00:37:50.842 /dev/nbd10 00:37:50.842 /dev/nbd11' 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:37:50.842 /dev/nbd1 00:37:50.842 /dev/nbd10 00:37:50.842 /dev/nbd11' 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:37:50.842 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:37:50.843 256+0 records in 00:37:50.843 256+0 records out 00:37:50.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113805 s, 92.1 MB/s 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:50.843 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:37:51.102 256+0 records in 00:37:51.102 256+0 records out 00:37:51.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0722512 s, 14.5 MB/s 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:37:51.102 256+0 records in 00:37:51.102 256+0 records out 00:37:51.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0586608 s, 17.9 MB/s 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:37:51.102 256+0 records in 00:37:51.102 256+0 records out 00:37:51.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332992 s, 31.5 MB/s 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:37:51.102 256+0 records in 00:37:51.102 256+0 records out 00:37:51.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0487766 s, 21.5 MB/s 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:51.102 10:48:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:51.361 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:51.362 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:37:51.622 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:51.622 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:51.623 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:37:51.881 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:51.882 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:52.141 10:48:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:37:52.400 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:37:52.401 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:37:52.401 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:52.401 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:37:52.401 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:37:52.401 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:37:52.401 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b 
malloc_lvol_verify 16 512 00:37:52.660 malloc_lvol_verify 00:37:52.660 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:37:52.919 b08feef4-2866-4a20-aa93-8f7818ddff6c 00:37:52.919 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:37:53.177 0e1b980e-1b5c-4c9e-b9ae-0f7c2c752948 00:37:53.177 10:48:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:37:53.436 /dev/nbd0 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:37:53.436 mke2fs 1.46.5 (30-Dec-2021) 00:37:53.436 Discarding device blocks: 0/4096 done 00:37:53.436 Creating filesystem with 4096 1k blocks and 1024 inodes 00:37:53.436 00:37:53.436 Allocating group tables: 0/1 done 00:37:53.436 Writing inode tables: 0/1 done 00:37:53.436 Creating journal (1024 blocks): done 00:37:53.436 Writing superblocks and filesystem accounting information: 0/1 done 00:37:53.436 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:53.436 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3613855 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3613855 ']' 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3613855 00:37:53.694 
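The lvol check above builds a small stack entirely over RPC: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on top of it, a 4 MiB lvol, an NBD export of that lvol, and finally mkfs.ext4 on the exported node. The same sequence condensed, with the socket path and sizes taken from the trace:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0

mkfs.ext4 /dev/nbd0                                                 # must succeed for the test to pass
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0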
10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:53.694 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3613855 00:37:53.695 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:53.695 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:53.695 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3613855' 00:37:53.695 killing process with pid 3613855 00:37:53.695 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3613855 00:37:53.695 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3613855 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:37:53.954 00:37:53.954 real 0m9.813s 00:37:53.954 user 0m12.548s 00:37:53.954 sys 0m3.944s 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:37:53.954 ************************************ 00:37:53.954 END TEST bdev_nbd 00:37:53.954 ************************************ 00:37:53.954 10:48:06 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:37:53.954 10:48:06 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:37:53.954 10:48:06 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:37:53.954 10:48:06 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:37:53.954 10:48:06 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:53.954 10:48:06 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:53.954 10:48:06 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:37:53.954 ************************************ 00:37:53.954 START TEST bdev_fio 00:37:53.954 ************************************ 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:37:53.954 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1281 -- # local workload=verify 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:37:53.954 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:37:54.213 ************************************ 00:37:54.213 START TEST bdev_fio_rw_verify 00:37:54.213 ************************************ 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.213 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 
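fio_bdev is a thin wrapper around plain fio: it runs ldd on the SPDK fio plugin, preloads whatever ASAN runtime the plugin links against, and then invokes fio with ioengine=spdk_bdev plus --spdk_json_conf pointing at the generated bdev config. A trimmed sketch of that wrapper under the paths shown in the log; the grep/awk detection is a simplification of the per-sanitizer loop in the trace:

plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
jobfile=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json

# preload a sanitizer runtime only if the plugin was built against one
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt.asan' | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf="$conf" --verify_state_save=0 "$jobfile"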
00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:54.214 10:48:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:54.214 10:48:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:54.214 10:48:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:54.214 10:48:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:54.214 10:48:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:37:54.782 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:54.782 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:54.782 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:54.782 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:37:54.782 fio-3.35 00:37:54.782 Starting 4 threads 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.4 cannot be used 
00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:54.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:54.782 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:09.657 00:38:09.657 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3616831: Fri Jul 26 10:48:20 2024 00:38:09.657 read: IOPS=28.9k, BW=113MiB/s (118MB/s)(1127MiB/10001msec) 00:38:09.657 slat (usec): min=15, max=1080, avg=46.18, stdev=20.92 00:38:09.657 clat (usec): min=20, max=1601, avg=272.91, stdev=160.16 00:38:09.657 lat (usec): min=53, max=1652, avg=319.09, stdev=169.75 00:38:09.657 clat percentiles (usec): 00:38:09.657 | 50.000th=[ 227], 99.000th=[ 775], 99.900th=[ 947], 99.990th=[ 1057], 00:38:09.657 | 99.999th=[ 1434] 00:38:09.657 write: IOPS=31.8k, BW=124MiB/s (130MB/s)(1210MiB/9749msec); 0 zone resets 00:38:09.657 slat (usec): min=22, max=323, avg=56.52, stdev=20.09 00:38:09.657 clat (usec): min=17, max=1363, avg=305.41, stdev=161.09 00:38:09.657 lat (usec): min=53, max=1537, avg=361.93, stdev=170.24 00:38:09.657 clat percentiles (usec): 
00:38:09.657 | 50.000th=[ 273], 99.000th=[ 783], 99.900th=[ 938], 99.990th=[ 1074], 00:38:09.657 | 99.999th=[ 1254] 00:38:09.657 bw ( KiB/s): min=95440, max=149532, per=98.52%, avg=125212.00, stdev=5000.06, samples=76 00:38:09.657 iops : min=23860, max=37379, avg=31303.00, stdev=1249.99, samples=76 00:38:09.657 lat (usec) : 20=0.01%, 50=0.01%, 100=6.07%, 250=43.74%, 500=38.99% 00:38:09.657 lat (usec) : 750=9.82%, 1000=1.33% 00:38:09.657 lat (msec) : 2=0.03% 00:38:09.657 cpu : usr=99.67%, sys=0.01%, ctx=82, majf=0, minf=232 00:38:09.657 IO depths : 1=0.3%, 2=28.5%, 4=56.9%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.657 complete : 0=0.0%, 4=87.5%, 8=12.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.657 issued rwts: total=288623,309756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:09.657 00:38:09.657 Run status group 0 (all jobs): 00:38:09.657 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1127MiB (1182MB), run=10001-10001msec 00:38:09.657 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=1210MiB (1269MB), run=9749-9749msec 00:38:09.657 00:38:09.657 real 0m13.559s 00:38:09.657 user 0m54.397s 00:38:09.657 sys 0m0.635s 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:38:09.657 ************************************ 00:38:09.657 END TEST bdev_fio_rw_verify 00:38:09.657 ************************************ 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f9df31e8-4a2f-5647-a154-a2d9070d19ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9df31e8-4a2f-5647-a154-a2d9070d19ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ae4b1be0-24cd-5786-b3a0-47b26c89bca6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ae4b1be0-24cd-5786-b3a0-47b26c89bca6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "6385c116-e1c6-5231-b42e-fe8a4a9934ff"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "6385c116-e1c6-5231-b42e-fe8a4a9934ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e1813aa1-e244-54b5-ae0a-bb9429851e79"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e1813aa1-e244-54b5-ae0a-bb9429851e79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:38:09.657 crypto_ram1 00:38:09.657 crypto_ram2 00:38:09.657 crypto_ram3 ]] 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "f9df31e8-4a2f-5647-a154-a2d9070d19ee"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f9df31e8-4a2f-5647-a154-a2d9070d19ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ae4b1be0-24cd-5786-b3a0-47b26c89bca6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ae4b1be0-24cd-5786-b3a0-47b26c89bca6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "6385c116-e1c6-5231-b42e-fe8a4a9934ff"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "6385c116-e1c6-5231-b42e-fe8a4a9934ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e1813aa1-e244-54b5-ae0a-bb9429851e79"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "e1813aa1-e244-54b5-ae0a-bb9429851e79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- 
bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:38:09.657 ************************************ 00:38:09.657 START TEST bdev_fio_trim 00:38:09.657 ************************************ 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:09.657 10:48:20 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:09.657 10:48:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:38:09.657 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:09.657 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:09.657 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:09.657 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:09.657 fio-3.35 00:38:09.658 Starting 4 threads 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:38:09.658 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:09.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:09.658 EAL: 
Requested device 0000:3f:02.7 cannot be used 00:38:21.844 00:38:21.844 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3619325: Fri Jul 26 10:48:33 2024 00:38:21.844 write: IOPS=38.6k, BW=151MiB/s (158MB/s)(1509MiB/10001msec); 0 zone resets 00:38:21.844 slat (usec): min=16, max=1277, avg=59.95, stdev=27.48 00:38:21.844 clat (usec): min=40, max=1551, avg=218.82, stdev=112.14 00:38:21.844 lat (usec): min=58, max=1594, avg=278.77, stdev=124.48 00:38:21.844 clat percentiles (usec): 00:38:21.844 | 50.000th=[ 202], 99.000th=[ 545], 99.900th=[ 611], 99.990th=[ 652], 00:38:21.844 | 99.999th=[ 881] 00:38:21.844 bw ( KiB/s): min=147936, max=220416, per=100.00%, avg=154930.53, stdev=4210.36, samples=76 00:38:21.844 iops : min=36984, max=55104, avg=38732.63, stdev=1052.59, samples=76 00:38:21.844 trim: IOPS=38.6k, BW=151MiB/s (158MB/s)(1509MiB/10001msec); 0 zone resets 00:38:21.844 slat (usec): min=5, max=404, avg=17.32, stdev= 6.02 00:38:21.844 clat (usec): min=36, max=1595, avg=278.93, stdev=124.49 00:38:21.844 lat (usec): min=42, max=1611, avg=296.25, stdev=125.63 00:38:21.844 clat percentiles (usec): 00:38:21.844 | 50.000th=[ 260], 99.000th=[ 652], 99.900th=[ 725], 99.990th=[ 775], 00:38:21.844 | 99.999th=[ 1418] 00:38:21.844 bw ( KiB/s): min=147936, max=220416, per=100.00%, avg=154930.53, stdev=4210.36, samples=76 00:38:21.844 iops : min=36984, max=55104, avg=38732.63, stdev=1052.59, samples=76 00:38:21.844 lat (usec) : 50=0.27%, 100=7.92%, 250=47.66%, 500=39.78%, 750=4.35% 00:38:21.844 lat (usec) : 1000=0.02% 00:38:21.844 lat (msec) : 2=0.01% 00:38:21.844 cpu : usr=99.62%, sys=0.00%, ctx=101, majf=0, minf=105 00:38:21.844 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:21.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.844 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:21.844 issued rwts: total=0,386419,386420,0 short=0,0,0,0 dropped=0,0,0,0 00:38:21.844 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:21.844 00:38:21.844 Run status group 0 (all jobs): 00:38:21.844 WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=1509MiB (1583MB), run=10001-10001msec 00:38:21.844 TRIM: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=1509MiB (1583MB), run=10001-10001msec 00:38:21.844 00:38:21.844 real 0m13.557s 00:38:21.844 user 0m53.983s 00:38:21.844 sys 0m0.617s 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:38:21.844 ************************************ 00:38:21.844 END TEST bdev_fio_trim 00:38:21.844 ************************************ 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:38:21.844 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:38:21.844 00:38:21.844 real 0m27.484s 00:38:21.844 user 1m48.565s 00:38:21.844 sys 0m1.458s 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:21.844 10:48:34 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@10 -- # set +x 00:38:21.844 ************************************ 00:38:21.844 END TEST bdev_fio 00:38:21.844 ************************************ 00:38:21.844 10:48:34 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:21.844 10:48:34 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:21.844 10:48:34 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:38:21.844 10:48:34 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:21.844 10:48:34 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:38:21.844 ************************************ 00:38:21.845 START TEST bdev_verify 00:38:21.845 ************************************ 00:38:21.845 10:48:34 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:21.845 [2024-07-26 10:48:34.452124] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:38:21.845 [2024-07-26 10:48:34.452183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620952 ] 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:21.845 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:21.845 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:21.845 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:21.845 [2024-07-26 10:48:34.584520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:21.845 [2024-07-26 10:48:34.631466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.845 [2024-07-26 10:48:34.631471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.845 [2024-07-26 10:48:34.652795] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:38:21.845 [2024-07-26 10:48:34.660824] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:21.845 [2024-07-26 10:48:34.668844] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:22.101 [2024-07-26 10:48:34.766615] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:38:24.622 [2024-07-26 
10:48:37.092911] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:38:24.622 [2024-07-26 10:48:37.092973] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:38:24.622 [2024-07-26 10:48:37.092987] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:24.622 [2024-07-26 10:48:37.100930] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:38:24.622 [2024-07-26 10:48:37.100948] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:38:24.622 [2024-07-26 10:48:37.100959] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:24.622 [2024-07-26 10:48:37.108953] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:38:24.622 [2024-07-26 10:48:37.108971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:38:24.622 [2024-07-26 10:48:37.108981] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:24.622 [2024-07-26 10:48:37.116977] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:38:24.622 [2024-07-26 10:48:37.116994] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:38:24.622 [2024-07-26 10:48:37.117004] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:24.622 Running I/O for 5 seconds... 00:38:29.943 00:38:29.943 Latency(us) 00:38:29.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.943 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x0 length 0x1000 00:38:29.943 crypto_ram : 5.07 530.43 2.07 0.00 0.00 240668.05 15309.21 152672.67 00:38:29.943 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x1000 length 0x1000 00:38:29.943 crypto_ram : 5.07 530.51 2.07 0.00 0.00 240612.50 17930.65 152672.67 00:38:29.943 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x0 length 0x1000 00:38:29.943 crypto_ram1 : 5.07 530.09 2.07 0.00 0.00 239730.65 17406.36 139250.89 00:38:29.943 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x1000 length 0x1000 00:38:29.943 crypto_ram1 : 5.07 530.38 2.07 0.00 0.00 239673.54 17406.36 138412.03 00:38:29.943 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x0 length 0x1000 00:38:29.943 crypto_ram2 : 5.05 4154.12 16.23 0.00 0.00 30479.70 6973.03 25899.83 00:38:29.943 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x1000 length 0x1000 00:38:29.943 crypto_ram2 : 5.06 4163.09 16.26 0.00 0.00 30410.36 2647.65 25899.83 00:38:29.943 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x0 length 0x1000 00:38:29.943 crypto_ram3 : 5.06 4170.83 16.29 0.00 0.00 30294.22 1848.12 26528.97 00:38:29.943 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:29.943 Verification LBA range: start 0x1000 length 0x1000 00:38:29.943 
crypto_ram3 : 5.06 4171.67 16.30 0.00 0.00 30282.37 1874.33 26424.12 00:38:29.943 =================================================================================================================== 00:38:29.943 Total : 18781.11 73.36 0.00 0.00 54101.94 1848.12 152672.67 00:38:29.943 00:38:29.943 real 0m8.201s 00:38:29.943 user 0m15.516s 00:38:29.943 sys 0m0.480s 00:38:29.943 10:48:42 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:29.943 10:48:42 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:38:29.943 ************************************ 00:38:29.943 END TEST bdev_verify 00:38:29.943 ************************************ 00:38:29.943 10:48:42 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:38:29.943 10:48:42 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:38:29.943 10:48:42 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:29.943 10:48:42 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:38:29.943 ************************************ 00:38:29.943 START TEST bdev_verify_big_io 00:38:29.943 ************************************ 00:38:29.943 10:48:42 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:38:29.943 [2024-07-26 10:48:42.743120] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
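The verify stage above and the big-I/O verify stage starting here both point bdevperf at the same test/bdev/bdev.json that the earlier fio passes used; the file itself never appears in this log. For orientation only, a hypothetical minimal configuration of the same shape, inferred from the Malloc0-3 base bdevs, crypto_ram* vbdevs, and test_dek_qat_* key names that do show up in the output, could look like the sketch below; every value in it is an assumption, not the file the harness actually loaded.

# Hypothetical sketch only: one AES_CBC key plus one crypto vbdev, mirroring names
# seen in the log output. The real bdev.json also defines the XTS keys and the
# remaining Malloc/crypto_ram pairs, and its exact contents are not shown here.
cat > bdev.json.example <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        { "method": "accel_crypto_key_create",
          "params": { "name": "test_dek_qat_cbc", "cipher": "AES_CBC", "key": "<hex key>" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_crypto_create",
          "params": { "base_bdev_name": "Malloc0", "name": "crypto_ram",
                      "key_name": "test_dek_qat_cbc" } }
      ]
    }
  ]
}
EOF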
00:38:29.943 [2024-07-26 10:48:42.743194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622280 ] 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:29.943 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:29.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:29.943 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:30.202 [2024-07-26 10:48:42.876597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:30.202 [2024-07-26 10:48:42.921312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:30.202 [2024-07-26 10:48:42.921317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.202 [2024-07-26 10:48:42.942652] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:38:30.202 [2024-07-26 10:48:42.950682] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:30.202 [2024-07-26 10:48:42.958704] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:30.202 [2024-07-26 10:48:43.056164] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:38:32.731 [2024-07-26 10:48:45.375920] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:38:32.731 [2024-07-26 10:48:45.375988] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:38:32.731 [2024-07-26 10:48:45.376003] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:32.731 [2024-07-26 10:48:45.383935] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:38:32.731 [2024-07-26 10:48:45.383952] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:38:32.732 [2024-07-26 10:48:45.383963] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:32.732 [2024-07-26 10:48:45.391959] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:38:32.732 [2024-07-26 10:48:45.391976] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:38:32.732 [2024-07-26 10:48:45.391986] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:32.732 [2024-07-26 10:48:45.399982] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:38:32.732 [2024-07-26 10:48:45.400003] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc3 00:38:32.732 [2024-07-26 10:48:45.400013] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:32.732 Running I/O for 5 seconds... 00:38:33.668 [2024-07-26 10:48:46.299831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.300822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.304681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.305075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.305089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.668 [2024-07-26 10:48:46.308813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.308958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.309304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.309320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.312618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.312667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.312706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.312744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.313112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.313160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.313198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.313270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.313663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.313679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.317971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.318009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.668 [2024-07-26 10:48:46.318435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.318451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.321638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.321694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.321768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.321809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.322196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.322236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.322274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.322312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.322708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.322724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.325675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.325723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.325776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.325815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.326288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.326329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.326367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.326406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.326804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.326820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.329809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.329853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.668 [2024-07-26 10:48:46.329892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.329931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.668 [2024-07-26 10:48:46.330362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.330404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.330443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.330481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.330808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.330824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.333910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.333953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.333992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.334033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.334469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.334533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.334571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.334609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.335050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.335066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.669 [2024-07-26 10:48:46.338810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.338849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.339246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.339263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.342893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.343286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.343303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.346864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.347199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.347215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.669 [2024-07-26 10:48:46.350074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.350728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.351071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.351086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.354836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.355229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.355245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.669 [2024-07-26 10:48:46.358803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.358920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.359326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.359343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.362837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.363231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.363247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.366143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.366185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.366224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.366264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.366691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.669 [2024-07-26 10:48:46.366732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.366771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.366809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.670 [2024-07-26 10:48:46.367155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.367172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.370764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.371216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.371232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.374734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.375122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.375143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.377887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.377929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.670 [2024-07-26 10:48:46.377967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.378938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.381731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.381774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.381814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.381853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.382290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.382331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.382371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.382421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.382759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.382774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.385543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.385585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.385624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.385662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.386032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.386073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.670 [2024-07-26 10:48:46.386113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.386168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.386500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.386516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.389454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.389509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.389562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.389601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.389986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.390028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.390067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.390106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.390485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.390501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.393883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.394279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.394297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.670 [2024-07-26 10:48:46.396848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.396891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.396929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.396969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.397432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.397477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.397515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.397554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.397949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.397966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.400610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.400652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.400690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.400729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.401155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.401198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.670 [2024-07-26 10:48:46.401236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.401274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.401603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.401618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.671 [2024-07-26 10:48:46.404754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.404887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.405313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.405330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.407971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.408997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.409016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.411721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.411764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.411802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.411841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.412270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.412312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.412351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.412389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.671 [2024-07-26 10:48:46.412750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.412765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.414968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.415005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.415239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.415254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.417973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.418013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.418418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.418435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.419999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.671 [2024-07-26 10:48:46.420080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.420749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.423049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.423418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.424170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.425478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.427372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.429060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.430172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.431482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.431714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.431728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.434196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.434557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.436260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.437802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.439596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.440393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.671 [2024-07-26 10:48:46.441785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.443360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.443593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.443608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.446173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.447248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.448547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.450111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.451713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.453155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.454474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.456045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.671 [2024-07-26 10:48:46.456283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.456298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.459072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.460507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.462095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.463653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.464675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.465992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.467552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.469107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.469347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.469362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.672 [2024-07-26 10:48:46.473063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.474367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.475918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.477487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.479503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.481053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.482693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.484448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.484788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.484803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.488567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.490099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.491656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.492860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.494412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.495956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.497500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.498402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.498838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.498854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.502472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.504037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.505619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.506528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.672 [2024-07-26 10:48:46.508540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.510121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.511634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.511992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.512380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.512399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.515965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.517527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.518492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.520103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.521897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.523452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.524021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.524388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.524796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.524811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.528568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.530117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.531356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.532673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.534462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.535654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.536011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.536371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.672 [2024-07-26 10:48:46.536785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.536800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.540142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.540938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.542300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.543853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.545610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.545976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.546337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.546693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.547107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.547123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.550225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.551549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.552862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.672 [2024-07-26 10:48:46.554421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.555727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.556103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.556462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.556817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.557205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.557221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.559444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.560751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.673 [2024-07-26 10:48:46.562309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.563871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.564469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.564825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.565186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.565546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.565830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.673 [2024-07-26 10:48:46.565846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.933 [2024-07-26 10:48:46.568908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.933 [2024-07-26 10:48:46.570325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.933 [2024-07-26 10:48:46.571878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.573480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.574225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.574583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.574939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.575666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.575901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.575917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.578778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.580318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.581869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.582842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.583605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.583962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.934 [2024-07-26 10:48:46.584326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.585911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.586187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.586207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.589264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.590950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.592623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.592981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.593734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.594090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.595056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.596364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.596598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.596613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.599688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.601250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.601970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.602333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.603081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.603443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.605043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.606790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.607024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.607040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.934 [2024-07-26 10:48:46.610172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.611669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.612027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.612391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.613125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.614279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.615591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.617143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.617376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.617395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.620482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.621093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.621456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.621814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.622607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.624026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.625599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.627164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.627397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.627412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.630272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.630635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.630992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.631375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:33.934 [2024-07-26 10:48:46.633137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.634461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.636021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.637570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.637952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.637967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.639780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.640146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.640506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.640864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.642461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.644017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.645573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.646644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.646881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.646896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.648773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.649135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.649497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.650220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.651998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.653653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.655379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:33.934 [2024-07-26 10:48:46.656523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.200 [2024-07-26 10:48:46.870076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.870113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.871425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.871661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.871710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.871748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.871786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.871823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.872052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.872067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.875077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.875776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.876134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.876493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.876907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.877618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.878937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.880494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.882058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.882299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.882315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.885081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.885449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.885805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.200 [2024-07-26 10:48:46.886182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.886603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.200 [2024-07-26 10:48:46.888312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.889869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.891518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.893267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.893606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.893620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.895406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.895767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.896123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.896485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.896741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.898063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.899601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.901154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.902003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.902239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.902254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.904022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.904386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.904743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.905500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.905735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.201 [2024-07-26 10:48:46.907391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.909143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.910763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.911912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.912233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.912248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.914090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.914453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.914814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.916463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.916746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.918306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.919863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.920614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.922075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.922310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.922325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.924384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.924744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.925608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.926927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.927165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.928762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.930251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.201 [2024-07-26 10:48:46.931548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.932863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.933096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.933110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.935264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.935624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.937267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.938944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.939183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.940751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.941552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.942866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.944409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.944642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.944661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.946949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.948105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.949436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.950993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.951230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.952448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.954045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.955501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.957111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.201 [2024-07-26 10:48:46.957347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.957362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.960078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.961398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.962949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.964489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.964724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.965762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.967079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.968639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.970203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.970489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.970504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.974267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.975703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.977260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.978875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.979265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.980692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.982256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.983821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.985165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.985560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.201 [2024-07-26 10:48:46.985575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.202 [2024-07-26 10:48:46.988985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.990535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.992089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.992907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.993160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.994476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.996020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.997578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.997942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.998331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:46.998347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.001966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.003542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.004975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.006330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.006630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.008211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.009771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.010740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.011105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.011510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.011526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.015068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.016628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.202 [2024-07-26 10:48:47.017429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.018759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.018992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.020662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.022210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.022567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.022924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.023314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.023329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.026483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.027551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.029264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.030831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.031064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.032645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.033282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.033641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.033997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.034437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.034456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.037511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.038517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.039825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.041362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.202 [2024-07-26 10:48:47.041596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.042986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.043348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.043704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.044075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.044466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.044482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.046737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.048309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.050002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.051573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.051807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.052238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.052595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.052950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.053315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.053570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.053584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.056137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.057463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.059026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.060587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.060911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.061286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.202 [2024-07-26 10:48:47.061643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.061999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.062895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.063181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.063196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.065903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.067471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.069036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.070283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.070675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.071037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.071400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.071756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.073407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.073640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.073655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.076372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.077916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.079467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.202 [2024-07-26 10:48:47.079832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.080250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.080612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.080968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.082013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.203 [2024-07-26 10:48:47.083322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.083554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.083569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.086553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.088118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.089196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.089556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.089966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.090331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.090688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.092198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.093844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.094078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.094092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.097095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.203 [2024-07-26 10:48:47.098786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.465 [2024-07-26 10:48:47.099157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.099514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.099883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.100248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.101524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.102835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.104375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.104612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.466 [2024-07-26 10:48:47.104627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.107635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.108499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.108857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.109218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.109636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.110226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.111540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.113081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.114632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.114864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.114879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.117923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.118291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.118649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.119005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.119395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.120831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.122153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.123715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.125274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.125618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.125632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.127656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.466 [2024-07-26 10:48:47.128021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.128382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.128737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.129040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.130356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.131916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.133482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.134524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.134759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.134773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.136527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.136887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.137248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.137874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.138109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.139647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.141279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.142996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.144059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.144337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.144352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.146161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.146523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.146880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.466 [2024-07-26 10:48:47.148020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.148259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.149834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.150875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.151893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.153471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.153705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.153719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.156118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.156487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.156851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.157213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.157562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.157922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.158287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.158650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.159008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.159428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.159444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.161913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.162282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.162638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.162998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.163386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.466 [2024-07-26 10:48:47.163754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.164112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.164472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.164831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.165129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.165148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.167655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.168017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.168399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.168757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.466 [2024-07-26 10:48:47.169150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.169518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.169879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.170248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.170611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.171017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.171033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.173469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.173829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.174195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.174552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.174911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.175284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.467 [2024-07-26 10:48:47.175650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.467 [2024-07-26 10:48:47.176005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:38:34.733 [2024-07-26 10:48:47.395340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:38:34.733 [2024-07-26 10:48:47.396943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.398629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.400346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.401442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.401711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.401726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.403609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.403973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.404336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.405862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.406129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.733 [2024-07-26 10:48:47.407712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.409280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.410069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.411560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.411796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.411812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.413834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.414199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.415007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.416333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.416567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.418216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.419746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.734 [2024-07-26 10:48:47.420975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.422293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.422528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.422543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.424729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.425093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.426784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.428330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.428565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.430144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.430936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.432260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.433819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.434053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.434067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.436258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.437231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.438546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.440090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.440329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.441755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.443126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.444450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.445999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.734 [2024-07-26 10:48:47.446239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.446254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.448729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.450264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.451936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.453507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.453741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.454551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.455887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.457445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.459014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.459282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.459298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.462760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.464009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.465567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.467116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.467490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.469202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.470820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.472555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.474187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.474577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.474592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.734 [2024-07-26 10:48:47.478013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.479582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.481148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.482176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.482411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.483746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.485310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.486871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.487469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.487867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.487883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.491202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.492770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.494427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.495417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.495685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.497249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.498797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.500185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.500548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.500951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.500966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.504389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.734 [2024-07-26 10:48:47.505957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.735 [2024-07-26 10:48:47.506783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.508282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.508517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.510086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.511655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.512051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.512419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.512790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.512805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.516152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.517654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.518956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.520273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.520508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.522090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.523130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.523504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.523868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.524304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.524319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.526658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.528117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.529725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.531354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.735 [2024-07-26 10:48:47.531676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.532039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.532401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.532759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.534174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.534462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.534476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.537240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.538848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.540404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.541356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.541708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.542075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.542448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.542808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.543176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.543528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.543542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.546153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.546524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.546882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.547246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.547652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.548017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.735 [2024-07-26 10:48:47.548384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.548745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.549100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.549551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.549567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.551976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.552357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.552720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.553084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.553436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.553801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.554162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.554519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.554877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.555195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.555211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.557754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.558121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.558485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.558845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.559294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.559656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.560018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.560396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.735 [2024-07-26 10:48:47.560755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.561144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.561159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.563616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.563979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.564341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.564700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.565008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.565381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.565741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.566099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.566465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.566780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.735 [2024-07-26 10:48:47.566795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.569368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.569736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.570099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.570463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.570812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.571184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.571546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.571911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.572273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.572675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.736 [2024-07-26 10:48:47.572691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.575054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.575425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.575784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.576146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.576524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.576891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.577256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.577612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.577968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.578338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.578354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.580833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.581203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.581253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.581612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.582017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.582383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.582740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.583095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.583465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.583877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.583893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.586501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.736 [2024-07-26 10:48:47.586865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.587244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.587287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.587690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.588052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.588417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.588789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.589164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.589576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.589593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.591707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.591749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.591787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.591825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.592847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.736 [2024-07-26 10:48:47.595266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.595847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.596191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.596206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.598978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.599028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.599068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.599521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.599536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.601627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.601670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.601708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.601746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.736 [2024-07-26 10:48:47.602105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.737 [2024-07-26 10:48:47.602162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.602201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.602239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.602276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.602671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.602687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.604826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.604867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.604904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.604942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.605913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.737 [2024-07-26 10:48:47.608876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.608915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.609262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.609278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.611429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.611472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.611509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.611548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.611954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.611995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.612046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.612084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.612136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.612494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.612509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.614701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.614743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.614785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.614823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.615245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.615315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.615367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.615406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.615445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:34.737 [2024-07-26 10:48:47.615752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.615767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.618780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.619105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.619120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.621874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.622156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:34.737 [2024-07-26 10:48:47.622171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.264 [2024-07-26 10:48:47.926061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.926435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.926801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.926816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.929437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.929808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.930177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.930537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.930927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.931300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.931663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.932023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.932387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.932773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.932789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.935227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.935594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.935955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.936323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.936713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.937100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.937468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.937826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.938196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.264 [2024-07-26 10:48:47.938569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.938584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.941203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.941572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.941935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.942298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.942668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.943033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.943399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.943762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.944123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.944535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.944551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.947050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.947419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.947782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.948148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.948501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.948867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.949230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.949588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.949946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.950337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.950352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.264 [2024-07-26 10:48:47.952813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.953184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.953552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.953914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.954328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.954693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.955050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.955419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.955786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.956180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.956196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.958728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.959111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.959480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.959838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.960237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.960604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.961067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.962463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.264 [2024-07-26 10:48:47.962827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.963193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.963207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.965696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.967005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.265 [2024-07-26 10:48:47.967371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.968334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.968574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.968941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.969310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.969672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.970854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.971148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.971163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.973499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.973865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.975130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.975726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.976112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.977217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.977972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.978333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.978691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.979032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.979048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.981621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.981985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.982028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.982415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.265 [2024-07-26 10:48:47.982839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.984294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.984678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.985036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.986653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.987115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.987130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.989384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.989827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.991248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.991294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.991705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.992068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.992442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.993275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.994264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.994665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.994681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.996621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.996675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.996713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.996750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.997058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.997111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.265 [2024-07-26 10:48:47.997156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.997195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.997233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.997465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.997480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.999450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.999492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.999529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.999567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:47.999963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.000004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.000062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.000100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.000163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.000510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.000525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.002581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.002622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.002659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.002696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.002933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.002983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.003025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.003063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.265 [2024-07-26 10:48:48.003101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.003502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.003517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.265 [2024-07-26 10:48:48.005352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.005985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.006028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.006264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.006279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.008993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.009031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.009069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.009455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.266 [2024-07-26 10:48:48.009471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.011970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.012008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.012244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.012259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.014841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.015194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.015210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.266 [2024-07-26 10:48:48.017253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.017737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.018127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.018150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.019778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.019824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.019862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.019904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.020743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.022819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.022861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.022912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.266 [2024-07-26 10:48:48.022965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.023917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.026877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.027324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.027350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.029344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.029389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.029430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.029466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.266 [2024-07-26 10:48:48.029697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.267 [2024-07-26 10:48:48.029746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.029783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.029821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.029859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.030089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.030103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.031658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.031699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.031737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.031774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.032545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.034723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.034764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.034801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.034838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.035110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.035163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.035202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.267 [2024-07-26 10:48:48.035243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.035281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.035511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.035526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.037812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.267 [2024-07-26 10:48:48.040798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.040813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.045791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.046159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.046175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.267 [2024-07-26 10:48:48.050961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.267 [2024-07-26 10:48:48.055259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.267-00:38:35.795 [2024-07-26 10:48:48.055320 - 10:48:48.443711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (the same error line repeats continuously over this interval; several hundred duplicate occurrences omitted)
00:38:35.795 [2024-07-26 10:48:48.443751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.443796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.444057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.444106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.444149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.444195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.444428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.444443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.445897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.445944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.445982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.446658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.448646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.448689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.448729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.448767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.449203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.449244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.795 [2024-07-26 10:48:48.449287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.449324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.449556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.449571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.451820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.453585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.453627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.453666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.453708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.454134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.795 [2024-07-26 10:48:48.454180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.454219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.454258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.454647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.454663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.796 [2024-07-26 10:48:48.456031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.456952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.458458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.458498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.458540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.458582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.458999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.459039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.459078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.459116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.459502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.459519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.796 [2024-07-26 10:48:48.461543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.461997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.462012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.463376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.463425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.463465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.463507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.463947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.463986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.464025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.464064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.464462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.464479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.466834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.796 [2024-07-26 10:48:48.467063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.467078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.468589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.468638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.468681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.468719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.468977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.469024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.469063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.469103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.469491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.469507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.471631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.471671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.471709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.471756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.472032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.472071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.472108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.472151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.472382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.472397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.473867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.473908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.796 [2024-07-26 10:48:48.473968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.474274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.474333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.474374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.474605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.504405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.509120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.515990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.516049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.516098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.516439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.518708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.520276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.520321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.520353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.796 [2024-07-26 10:48:48.520623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.520673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.521570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.521613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.521661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.522951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.523192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.523207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.525214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.797 [2024-07-26 10:48:48.525258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.525614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.525654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.527050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.527367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.527418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.528981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.530538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.530580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.530882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.530898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.533171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.533539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.533895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.534256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.534604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.535939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.537507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.539061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.540368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.540655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.540669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.542346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.542707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.797 [2024-07-26 10:48:48.543062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.543423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.543657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.544974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.546522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.548078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.548868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.549101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.549117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.550887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.551251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.551607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.552730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.553013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.554592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.556099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.557359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.558877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.559145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.559160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.561056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.561422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.561780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.563306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.797 [2024-07-26 10:48:48.563542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.565096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.566658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.567456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.568787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.569021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.569036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.571066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.571435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.572674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.573986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.574225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.575800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.576917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.578586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.580119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.580356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.580371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.582643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.583256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.584546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.586100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.586338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.588018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.797 [2024-07-26 10:48:48.589031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.590336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.591879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.592112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.592127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.594436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.595914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.597235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.598790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.599024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.599925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.797 [2024-07-26 10:48:48.601487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.603202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.604819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.605052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.605067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.608101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.609424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.610985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.612545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.612810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.614093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.615407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.616970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.798 [2024-07-26 10:48:48.618527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.618867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.618882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.622927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.624637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.626289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.627828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.628144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.629459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.631016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.632584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.633679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.634062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.634078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.637355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.638900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.640460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.641261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.641495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.642976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.644553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.646197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.646563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.646968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.798 [2024-07-26 10:48:48.646984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.650403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.651960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.653267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.654725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.655001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.656541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.658101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.658998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.659361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.659762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.659779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.663233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.663928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.665662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.667311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.667545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.668053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.668414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.668771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.669126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.669395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.669410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.671821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.798 [2024-07-26 10:48:48.673153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.674720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.676288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.676625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.676994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.677354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.677710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.678529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.678783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.678798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.681936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.683564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.685290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.686872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.687234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.687602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.687961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.688322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.688684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.688986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.689001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.691623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.691987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.692352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:35.798 [2024-07-26 10:48:48.692708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.693122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.693490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.693857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:35.798 [2024-07-26 10:48:48.694225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.694582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.694949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.694964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.697377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.697737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.698093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.698460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.698774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.699148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.699505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.699862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.700226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.700589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.700604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.703108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.703485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.703843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.704203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.704598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.062 [2024-07-26 10:48:48.704960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.705328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.705692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.706053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.706433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.706448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.708867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.709231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.709589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.709946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.710272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.062 [2024-07-26 10:48:48.710640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.710997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.711359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.711719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.712029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.712044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.714542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.714908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.715277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.715636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.715989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.716354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.716715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.063 [2024-07-26 10:48:48.717076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.717438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.717836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.717852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.720294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.720657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.721015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.721380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.721722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.722087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.722450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.722806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.723166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.723513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.723527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.725933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.726303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.726665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.727037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.727438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.727799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.728162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.728525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.728907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.063 [2024-07-26 10:48:48.729325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.729341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.731821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.732187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.732544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.732900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.733236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.733607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.733968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.734326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.734696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.735098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.735117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.737705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.738068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.738439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.738804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.739209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.739572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.739929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.740290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.740652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.741044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.741059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.063 [2024-07-26 10:48:48.743492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.743540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.743897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.744257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.744693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.745058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.745425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.745789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.746151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.746489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.746505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.748963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.749327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.749685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.749727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.750054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.750426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.750785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.750833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.751191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.751655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.751671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.754037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.754404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.063 [2024-07-26 10:48:48.754448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.755008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.755248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.755616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.755659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.756956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.757320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.063 [2024-07-26 10:48:48.757718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.757733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.760234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.760288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.760643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.760682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.761105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.761153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.761513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.761878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.761922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.762304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.762320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.764808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.764856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.766147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.766189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.064 [2024-07-26 10:48:48.766546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.766919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.766959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.768571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.768621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.769096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.769111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.772192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.772238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.773023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.774344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.774577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.776178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.777886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.777936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.778298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.778692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.778708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.780595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.782152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.782194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.783178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.783423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.784636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.064 [2024-07-26 10:48:48.784680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.785981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.786437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.786669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.786684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.788312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.788675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.790275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.790322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.790736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.790780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.791439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.792727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.792770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.793001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.793017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.796067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.796112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.797663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.797704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.797941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.798313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.798355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.798396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.064 [2024-07-26 10:48:48.798750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.799172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.799189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.800786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.800827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.800864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.800902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.801687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.803990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.064 [2024-07-26 10:48:48.804006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.805968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.806010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.064 [2024-07-26 10:48:48.806053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.806750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.808772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.809007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.809022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.065 [2024-07-26 10:48:48.811054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.811985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.813968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.814204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.814219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.815718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.815760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.815799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.065 [2024-07-26 10:48:48.815837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.816667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.818654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.818695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.818740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.818783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.819430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.820863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.820903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.820944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.820981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.821279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.065 [2024-07-26 10:48:48.821329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.821369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.821408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.821450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.821880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.821896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.823874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.823922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.823964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.824662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.065 [2024-07-26 10:48:48.826573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.066 [2024-07-26 10:48:48.826611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.826648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.827016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.827031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.828771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.828812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.828854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.828899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.829855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.831766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.066 [2024-07-26 10:48:48.832038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.832052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.833621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.833663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.833702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.833741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.834745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.836765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.837019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.837036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.066 [2024-07-26 10:48:48.838411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.838947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.839371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.839389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.841975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.842210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.842225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.843679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.843719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.066 [2024-07-26 10:48:48.843757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.843795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.844441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.846553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.846595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.846633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.846673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.066 [2024-07-26 10:48:48.847476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.067 [2024-07-26 10:48:48.848903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.067 [2024-07-26 10:48:48.848944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.067 [2024-07-26 10:48:48.848982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.067 [2024-07-26 10:48:48.849019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.067 [2024-07-26 10:48:48.849314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.334 [2024-07-26 10:48:49.226841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (same *ERROR* from accel_dpdk_cryptodev.c:468 repeated continuously between 10:48:48.849314 and 10:48:49.226841) 
00:38:36.334 [2024-07-26 10:48:49.226897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.334 [2024-07-26 10:48:49.226938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.227762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.229785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.229829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.229867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.229917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.335 [2024-07-26 10:48:49.230606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.234850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.234896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.234934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.234971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.597 [2024-07-26 10:48:49.235256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.235308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.235347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.235400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.235449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.235679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.235694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.237745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.237787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.237830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.237869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.238712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.242551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.242597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.242634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.242671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.242902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.242955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.597 [2024-07-26 10:48:49.242993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.243030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.243068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.243426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.243442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.245921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.246302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.246317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.597 [2024-07-26 10:48:49.250796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.250841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.250879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.250916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.251151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.251201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.251239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.251277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.598 [2024-07-26 10:48:49.251314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.251543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.251557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.253873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.254306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.254323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.257962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.258774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.598 [2024-07-26 10:48:49.258789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.260978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.261212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.261227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.264649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.264695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.264735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.264781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.265629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.598 [2024-07-26 10:48:49.267109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.267765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.268171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.268188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.273957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.274261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.274277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.275633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.275675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.275714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.598 [2024-07-26 10:48:49.275753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.275993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.276038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.276076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.276114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.276158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.276602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.276620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.278900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.278947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.278993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.598 [2024-07-26 10:48:49.279036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.279691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.599 [2024-07-26 10:48:49.281585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.281966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.284971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.285008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.285242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.285258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.286737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.286779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.286816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.286853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.287082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.287131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.287174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.599 [2024-07-26 10:48:49.287212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.287249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.287526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.287542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.292581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.292627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.292670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.292708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.292939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.292988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.293026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.293070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.293108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.293343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.293359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.294879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.294922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.294963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.599 [2024-07-26 10:48:49.295632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.295646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.298669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.298715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.298754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.298793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.299606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.301835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.599 [2024-07-26 10:48:49.305936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.306734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.306781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.306819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.307214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.307257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.307296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.599 [2024-07-26 10:48:49.307339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.307376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.307608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.307623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.309066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.309106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.309148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.310451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.310684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.310733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.310776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.312321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.312364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.312593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.312611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.316859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.316905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.600 [2024-07-26 10:48:49.318409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.318450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.318726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.318777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.320315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.320358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.320395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.320627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.320646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.322113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.323669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.323711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.324562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.324800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.325172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.325213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.325255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.326410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.326708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.326723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.330094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.331626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.331673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.333234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.600 [2024-07-26 10:48:49.333470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.333522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.335086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.335128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.335917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.336156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.336172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.338502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.340063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.340105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.340147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.340379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.340430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.340468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.341537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.341592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.341824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.341839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.345809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.345861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.347324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.347366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.347777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:36.600 [2024-07-26 10:48:49.347820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.600 [2024-07-26 10:48:49.348602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:36.600 [... the same "*ERROR*: Failed to get src_mbufs!" message from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeats continuously between 10:48:49.348602 and 10:48:49.773590; the identical intermediate log entries are omitted here for brevity ...] 
00:38:37.132 [2024-07-26 10:48:49.773590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.132 [2024-07-26 10:48:49.773823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.773868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.773907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.773951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.774000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.774237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.774256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.779695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.780090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.780106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.784857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.784904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.784941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.784980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.785263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.785315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.132 [2024-07-26 10:48:49.785353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.785391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.785431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.785664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.785679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.789989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.790228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.790243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.793736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.793781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.793824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.793865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.794097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.794143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.794192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.794233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.132 [2024-07-26 10:48:49.794271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.794502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.794517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.798974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.799758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.804848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.805081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.132 [2024-07-26 10:48:49.805096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.807552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.807602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.807643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.807679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.807910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.807976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.808015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.808053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.808090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.808325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.808341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.812613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.812675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.812713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.132 [2024-07-26 10:48:49.812752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.813709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.133 [2024-07-26 10:48:49.817164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.817902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.821779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.821824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.821864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.821902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.822562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.826904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.826954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.826991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.133 [2024-07-26 10:48:49.827028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.827848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.831657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.833822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.834098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.834113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.837930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.837976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.838015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.838926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.839237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.133 [2024-07-26 10:48:49.839290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.839328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.840884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.840927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.841163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.841178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.845399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.845448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.845805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.845845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.846230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.846275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.846631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.846670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.846712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.846999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.847014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.850856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.852415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.852459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.133 [2024-07-26 10:48:49.854022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.854382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.854751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.854792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.134 [2024-07-26 10:48:49.854834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.855196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.855592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.855607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.859796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.861116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.861164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.862688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.862924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.862975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.863714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.863754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.864110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.864476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.864492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.867898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.869075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.869117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.869159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.869433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.869484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.869522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.871089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.871132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.134 [2024-07-26 10:48:49.871371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.871386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.875995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.876048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.877601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.877644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.877876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.877927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.879200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.879242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.879279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.879584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.879598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.883782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.883832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.883870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.885171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.885404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.887168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.887214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.887251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.888518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.888782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.888797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.134 [2024-07-26 10:48:49.892995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.893367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.893409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.894048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.894292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.894344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.896060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.897682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.897725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.897955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.897970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.902467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.902833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.903196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.904856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.905091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.906654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.908221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.909011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.910320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.910554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.910568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.915595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.916902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.134 [2024-07-26 10:48:49.918466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.920023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.920335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.921886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.134 [2024-07-26 10:48:49.923277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.924834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.926369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.926708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.926724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.931335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.932817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.934146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.935465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.935700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.937264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.938318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.938691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.939045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.939479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.939495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.944754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.946486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.946846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.947207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.135 [2024-07-26 10:48:49.947595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.947956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.949272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.950574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.952136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.952376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.952391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.957948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.958326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.959066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.960186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.960591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.961370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.962684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.964237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.965792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.966045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.966061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.971083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.971477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.971907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.973325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.973559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.975128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.135 [2024-07-26 10:48:49.976690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.977557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.978851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.979087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.979102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.984404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.985718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.987290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.988841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.989184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.990692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.992040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.993629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.995180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.995585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.995601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.998861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.999232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.999591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:49.999957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:50.000363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:50.000730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:50.001088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.135 [2024-07-26 10:48:50.001450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.135 [2024-07-26 10:48:50.001814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical *ERROR* line repeated continuously from 10:48:50.001814 through 10:48:50.238787 ...]
00:38:37.404 [2024-07-26 10:48:50.238787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:38:37.404 [2024-07-26 10:48:50.238829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.238866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.238903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.239154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.239169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.240649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.240689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.240726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.240763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.241738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.243381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.244940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.244982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.404 [2024-07-26 10:48:50.245555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.245879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.247344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.247384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.247421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.247779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.248904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.250366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.250408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.252010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.252056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.252291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.252335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.253883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.253933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.253979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.254212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.404 [2024-07-26 10:48:50.254228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.256478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.257675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.257717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.259028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.259265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.260832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.260875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.260915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.261711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.261945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.261960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.263438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.263799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.263838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.264198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.264603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.264648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.265623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.265665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.266973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.267208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.267223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.268683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.404 [2024-07-26 10:48:50.270238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.270280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.270317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.404 [2024-07-26 10:48:50.270547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.270597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.270638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.271317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.271358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.271784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.271800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.275405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.275451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.277015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.277055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.277290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.277340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.278621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.278662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.278699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.279000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.279015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.280933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.280996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.281036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.405 [2024-07-26 10:48:50.281396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.281762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.283377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.283430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.283470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.285014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.285250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.285265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.286735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.287100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.287145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.287505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.287902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.287947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.288309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.289996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.290036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.290275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.290290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.293089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.294655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.296281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.296648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.405 [2024-07-26 10:48:50.297042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.405 [2024-07-26 10:48:50.297414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.297771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.298990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.300299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.300532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.300547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.303483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.305025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.305956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.306319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.306726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.307087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.307563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.308923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.310468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.310702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.310718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.313866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.315500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.315863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.667 [2024-07-26 10:48:50.316224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.316607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.316968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.318424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.668 [2024-07-26 10:48:50.319735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.321310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.321552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.321567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.324601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.325535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.325896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.326257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.326675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.327036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.327406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.327772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.328128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.328522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.328538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.330937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.331305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.331665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.332028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.332391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.332779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.333135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.333495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.333857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.668 [2024-07-26 10:48:50.334259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.334278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.336800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.337169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.337529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.337884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.338301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.338662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.339026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.339393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.339750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.340108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.340123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.342681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.343041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.343402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.343762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.344103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.344474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.344831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.345191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.345551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.345887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.345902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.668 [2024-07-26 10:48:50.348409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.348777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.349155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.349516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.349883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.350247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.350607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.350973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.351336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.351730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.351745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.354270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.354634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.354989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.355354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.355755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.356132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.356495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.356850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.357215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.357531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.357547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.360009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.360376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.668 [2024-07-26 10:48:50.360740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.361099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.361503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.361866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.362232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.362611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.362969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.363366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.363383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.365760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.366120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.366480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.366837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.367183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.367559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.668 [2024-07-26 10:48:50.367932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.368294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.368650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.369020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.369035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.371535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.371897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.372265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.372625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.669 [2024-07-26 10:48:50.373029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.373395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.373755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.374132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.374500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.374977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.374992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.377496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.377861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.378222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.378577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.378883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.379255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.379615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.379972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.380352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.380740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.380756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.383414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.383779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.384157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.384520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.384929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.385295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.669 [2024-07-26 10:48:50.385653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.386011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.386376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.386771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.386786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.389889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.391003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.391721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.392077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.392469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.392832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.393194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.393562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.393921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.394336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.394352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.396746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.397105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.397464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.397824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.398182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.398549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.398913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.399274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.669 [2024-07-26 10:48:50.399630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.400029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.400044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.402520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.402887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.403254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.403617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.404026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.404399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.404756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.406490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.408088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.408326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.408342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.411346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.412919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.413289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.413649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.414037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.414405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.415517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.416843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.418410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.418645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.669 [2024-07-26 10:48:50.418659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.421662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.422640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.423021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.423387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.423827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.424196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.425751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.427460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.429062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.429300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.669 [2024-07-26 10:48:50.429315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.432383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.432752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.433109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.433468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.433864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.435050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.436367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.437922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.439472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.439847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.439862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.670 [2024-07-26 10:48:50.442182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.938 [2024-07-26 10:48:50.645988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.646394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.646412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.649856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.651427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.652331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.653833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.654098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.655722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.657279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.658032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.658398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.658821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.658837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.661327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.661689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.662044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.662410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.662735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.663102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.663467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.663827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.664191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.664559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.938 [2024-07-26 10:48:50.664574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.667094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.667467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.667832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.668196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.668550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.668917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.669283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.669646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.670006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.670404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.670420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.672850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.673221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.673594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.673957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.674368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.674738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.675097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.675458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.675816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.676182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.676197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.678679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.938 [2024-07-26 10:48:50.679044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.679413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.679774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.680191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.680553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.680909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.681295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.681663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.938 [2024-07-26 10:48:50.682127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.682148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.684589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.684954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.685313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.685671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.685990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.686364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.686727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.687086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.687450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.687849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.687865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.690380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.690744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.691107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.939 [2024-07-26 10:48:50.691474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.691893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.692264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.692623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.692982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.693349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.693733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.693748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.696304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.696673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.697032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.697397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.697798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.698172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.698539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.698900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.699260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.699678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.699694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.702129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.702499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.702859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.703227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.703572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.939 [2024-07-26 10:48:50.703936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.704300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.704658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.705022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.705402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.705418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.708167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.708535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.708894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.709254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.709669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.710033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.710412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.710774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.711132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.711479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.711495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.713903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.714268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.714625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.714990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.715378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.715748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.716107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.939 [2024-07-26 10:48:50.716473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.716833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.717197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.717213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.719745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.720111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.720480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.720841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.721225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.721592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.722570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.723420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.724781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.725119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.725135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.727875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.728254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.728618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.728977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.729380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.729746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.730111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.730479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.730839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.939 [2024-07-26 10:48:50.731221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.731236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.939 [2024-07-26 10:48:50.733605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.733973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.734343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.734706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.735064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.735455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.735818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.736180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.736543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.736951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.736966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.739939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.741249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.742804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.744351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.744600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.745836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.747158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.748718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.750274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.750611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.750626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.940 [2024-07-26 10:48:50.755048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.756673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.758386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.760045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.760366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.761689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.763256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.764817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.766025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.766373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.766392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.769787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.771144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.772692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.774251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.774614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.776107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.777729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.779290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.780718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.781060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.781075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.784695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.786337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.940 [2024-07-26 10:48:50.787886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.789351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.789637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.790947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.792514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.794075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.795101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.795499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.795514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.798816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.800379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.801944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.802751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.802986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.804446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.806014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.807678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.808046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.808452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.808470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.811991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.813557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.814780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.816338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:37.940 [2024-07-26 10:48:50.816605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.818171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.819727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.820542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.820905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.940 [2024-07-26 10:48:50.821317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.821333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.824776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.826339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.827125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.828447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.828681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.830331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.832052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.832416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.832778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.833132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:37.941 [2024-07-26 10:48:50.833152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.836442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.837637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.839224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.840662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.840897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.842469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:38.217 [2024-07-26 10:48:50.843237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.843600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.843958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.844400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.844416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.847438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.848230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.849548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.851103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.851342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.853070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.853443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.853802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.854165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.854557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.854572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.857188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.857235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.858909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.860538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.860775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.862362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.862909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.863273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:38.217 [2024-07-26 10:48:50.863634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.864063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.864078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.867111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.867982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.217 [2024-07-26 10:48:50.869299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.869345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.869578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.871137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.872440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.872482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.872839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.873231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.873247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.876636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.878189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.878232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.879020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.879257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.880819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.880868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.882488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.884013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.884345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:38.218 [2024-07-26 10:48:50.884361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.887785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.887833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.889394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.889436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.889669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.889719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.890920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.892510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.892559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.892794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.892808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.894858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.894913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.895272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.895313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.895648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.896995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.897039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.898602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.898644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.898875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.898890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:38:38.218 [2024-07-26 10:48:50.901902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:38:38.218 [2024-07-26 10:48:50.901949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical "accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" messages repeat with successive timestamps through 2024-07-26 10:48:50.942950; duplicate entries elided ...]
00:38:38.219 [2024-07-26 10:48:50.942965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:38:38.786
00:38:38.786 Latency(us)
00:38:38.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:38.786 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x0 length 0x100
00:38:38.786 crypto_ram : 5.93 43.14 2.70 0.00 0.00 2877279.44 271790.90 2442762.65
00:38:38.786 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x100 length 0x100
00:38:38.786 crypto_ram : 6.00 42.67 2.67 0.00 0.00 2918504.86 249980.52 2563558.60
00:38:38.786 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x0 length 0x100
00:38:38.786 crypto_ram1 : 5.94 43.13 2.70 0.00 0.00 2771425.69 271790.90 2228014.28
00:38:38.786 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x100 length 0x100
00:38:38.786 crypto_ram1 : 6.00 42.66 2.67 0.00 0.00 2811930.21 249980.52 2348810.24
00:38:38.786 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x0 length 0x100
00:38:38.786 crypto_ram2 : 5.58 287.93 18.00 0.00 0.00 396948.23 39845.89 637534.21
00:38:38.786 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x100 length 0x100
00:38:38.786 crypto_ram2 : 5.64 272.12 17.01 0.00 0.00 419971.29 89338.68 657666.87
00:38:38.786 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x0 length 0x100
00:38:38.786 crypto_ram3 : 5.71 298.51 18.66 0.00 0.00 371534.72 18350.08 473117.49
00:38:38.786 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:38:38.786 Verification LBA range: start 0x100 length 0x100
00:38:38.786 crypto_ram3 : 5.76 284.56 17.79 0.00 0.00 389853.14 49702.50 353999.26
00:38:38.786 ===================================================================================================================
00:38:38.786 Total : 1314.72 82.17 0.00 0.00 728105.60 18350.08 2563558.60
00:38:39.045
00:38:39.045 real 0m9.141s
00:38:39.045 user 0m17.316s
00:38:39.045 sys 0m0.552s
00:38:39.045 10:48:51 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:39.045 10:48:51 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:38:39.045 ************************************
00:38:39.045 END TEST bdev_verify_big_io
00:38:39.045 ************************************
00:38:39.045 10:48:51 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:38:39.045 10:48:51 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:38:39.045 10:48:51 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:38:39.045 10:48:51 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:38:39.045 ************************************
00:38:39.045 START TEST bdev_write_zeroes
00:38:39.045 ************************************
00:38:39.045 10:48:51 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
--json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:39.304 [2024-07-26 10:48:51.960242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:38:39.304 [2024-07-26 10:48:51.960296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623868 ] 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:39.304 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.304 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:39.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.305 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:39.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:39.305 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:39.305 [2024-07-26 10:48:52.094402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.305 [2024-07-26 10:48:52.138104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.305 [2024-07-26 10:48:52.159346] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:38:39.305 [2024-07-26 10:48:52.167373] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:38:39.305 [2024-07-26 10:48:52.175393] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:38:39.564 [2024-07-26 10:48:52.279685] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:38:42.098 [2024-07-26 10:48:54.600289] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:38:42.098 [2024-07-26 10:48:54.600351] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:38:42.099 [2024-07-26 10:48:54.600365] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:42.099 [2024-07-26 10:48:54.608313] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:38:42.099 [2024-07-26 10:48:54.608334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:38:42.099 [2024-07-26 10:48:54.608346] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:42.099 [2024-07-26 10:48:54.616330] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:38:42.099 [2024-07-26 10:48:54.616347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:38:42.099 [2024-07-26 10:48:54.616358] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:38:42.099 [2024-07-26 10:48:54.624350] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:38:42.099 [2024-07-26 10:48:54.624366] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:38:42.099 [2024-07-26 10:48:54.624377] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:38:42.099 Running I/O for 1 seconds...
00:38:43.036
00:38:43.036 Latency(us)
00:38:43.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:43.036 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:38:43.036 crypto_ram : 1.02 2223.41 8.69 0.00 0.00 57180.10 5006.95 69206.02
00:38:43.036 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:38:43.036 crypto_ram1 : 1.02 2228.85 8.71 0.00 0.00 56743.93 5006.95 64172.85
00:38:43.036 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:38:43.036 crypto_ram2 : 1.02 17172.68 67.08 0.00 0.00 7354.03 2215.12 9699.33
00:38:43.036 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:38:43.036 crypto_ram3 : 1.02 17150.53 66.99 0.00 0.00 7332.02 2215.12 7707.03
00:38:43.036 ===================================================================================================================
00:38:43.036 Total : 38775.47 151.47 0.00 0.00 13061.68 2215.12 69206.02
00:38:43.294
00:38:43.294 real 0m4.123s
00:38:43.294 user 0m3.624s
00:38:43.294 sys 0m0.459s
00:38:43.295 10:48:56 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:38:43.295
00:38:43.553 [2024-07-26 10:48:56.208982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3624425 ] 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:43.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.553 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:43.554 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:43.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:43.554 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:43.554 [2024-07-26 10:48:56.420735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.812 [2024-07-26 10:48:56.464978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.812 [2024-07-26 10:48:56.465040] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:38:43.812 [2024-07-26 10:48:56.465056] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:43.812 [2024-07-26 10:48:56.465066] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:43.812 00:38:43.812 real 0m0.433s 00:38:43.812 user 0m0.206s 00:38:43.812 sys 0m0.223s 00:38:43.812 10:48:56 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:43.812 10:48:56 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:38:43.812 ************************************ 00:38:43.812 END TEST bdev_json_nonenclosed 00:38:43.812 ************************************ 00:38:43.812 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:43.812 10:48:56 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:38:43.812 10:48:56 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:43.812 10:48:56 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:38:43.812 ************************************ 00:38:43.812 START TEST bdev_json_nonarray 00:38:43.812 ************************************ 00:38:43.812 10:48:56 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:43.812 [2024-07-26 10:48:56.666218] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:38:43.812 [2024-07-26 10:48:56.666273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3624642 ] 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:44.071 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:44.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:44.071 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:44.071 [2024-07-26 10:48:56.799979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.071 [2024-07-26 10:48:56.843665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.071 [2024-07-26 10:48:56.843735] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:38:44.071 [2024-07-26 10:48:56.843750] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:44.071 [2024-07-26 10:48:56.843761] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:44.071 00:38:44.071 real 0m0.307s 00:38:44.071 user 0m0.154s 00:38:44.071 sys 0m0.150s 00:38:44.071 10:48:56 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:44.071 10:48:56 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:38:44.071 ************************************ 00:38:44.071 END TEST bdev_json_nonarray 00:38:44.071 ************************************ 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:38:44.071 10:48:56 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:38:44.071 00:38:44.071 real 1m11.246s 00:38:44.071 user 2m54.777s 00:38:44.071 sys 0m9.795s 00:38:44.071 
10:48:56 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:44.071 10:48:56 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:38:44.071 ************************************ 00:38:44.071 END TEST blockdev_crypto_qat 00:38:44.071 ************************************ 00:38:44.330 10:48:57 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:38:44.330 10:48:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:44.330 10:48:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:44.330 10:48:57 -- common/autotest_common.sh@10 -- # set +x 00:38:44.330 ************************************ 00:38:44.330 START TEST chaining 00:38:44.330 ************************************ 00:38:44.330 10:48:57 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:38:44.330 * Looking for test storage... 00:38:44.330 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@7 -- # uname -s 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:38:44.330 10:48:57 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:44.330 10:48:57 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:44.330 10:48:57 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:44.330 10:48:57 chaining -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.330 10:48:57 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.330 10:48:57 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.330 10:48:57 chaining -- paths/export.sh@5 -- # export PATH 00:38:44.330 10:48:57 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@47 -- # : 0 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:38:44.330 10:48:57 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:44.330 10:48:57 chaining -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.330 10:48:57 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:44.330 10:48:57 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:44.330 10:48:57 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:38:44.330 10:48:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@296 -- # e810=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@297 -- # x722=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@298 -- # mlx=() 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:38:54.297 Found 0000:20:00.0 (0x8086 - 0x159b) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:38:54.297 Found 0000:20:00.1 (0x8086 - 0x159b) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:38:54.297 Found net devices under 0000:20:00.0: cvl_0_0 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:38:54.297 Found net devices under 0000:20:00.1: cvl_0_1 00:38:54.297 10:49:05 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:54.298 10:49:05 chaining -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:54.298 10:49:05 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:54.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:38:54.298 00:38:54.298 --- 10.0.0.2 ping statistics --- 00:38:54.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.298 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:38:54.298 00:38:54.298 --- 10.0.0.1 ping statistics --- 00:38:54.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.298 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@422 -- # return 0 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:54.298 10:49:06 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@481 -- # nvmfpid=3628868 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@482 -- # waitforlisten 3628868 00:38:54.298 10:49:06 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@831 -- # '[' -z 3628868 ']' 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:54.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:54.298 10:49:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.298 [2024-07-26 10:49:06.260205] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
00:38:54.298 [2024-07-26 10:49:06.260263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:54.298 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:54.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.298 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:54.298 [2024-07-26 10:49:06.391895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.298 [2024-07-26 10:49:06.433893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:54.298 [2024-07-26 10:49:06.433941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.298 [2024-07-26 10:49:06.433954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.298 [2024-07-26 10:49:06.433966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.298 [2024-07-26 10:49:06.433975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:54.298 [2024-07-26 10:49:06.434001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.298 10:49:07 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:54.298 10:49:07 chaining -- common/autotest_common.sh@864 -- # return 0 00:38:54.298 10:49:07 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:54.298 10:49:07 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:54.298 10:49:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.298 10:49:07 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:54.298 10:49:07 chaining -- bdev/chaining.sh@69 -- # mktemp 00:38:54.298 10:49:07 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.15sMGQmKI4 00:38:54.298 10:49:07 chaining -- bdev/chaining.sh@69 -- # mktemp 00:38:54.298 10:49:07 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.gGjQZSmuC4 00:38:54.298 10:49:07 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:38:54.299 10:49:07 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:38:54.299 10:49:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.299 10:49:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.299 malloc0 00:38:54.299 true 00:38:54.299 true 00:38:54.557 [2024-07-26 10:49:07.201434] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:38:54.557 crypto0 00:38:54.557 [2024-07-26 10:49:07.209461] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:38:54.557 crypto1 00:38:54.557 [2024-07-26 10:49:07.217585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:54.557 [2024-07-26 10:49:07.233786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@85 -- # update_stats 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:54.557 10:49:07 chaining -- 
bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:54.557 10:49:07 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.15sMGQmKI4 bs=1K count=64 00:38:54.557 64+0 records in 00:38:54.557 64+0 records out 00:38:54.557 65536 bytes (66 kB, 64 KiB) copied, 0.00105028 s, 62.4 MB/s 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.15sMGQmKI4 --ob Nvme0n1 --bs 65536 --count 1 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@25 -- # local config 00:38:54.557 10:49:07 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:38:54.558 10:49:07 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:38:54.558 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:38:54.816 10:49:07 chaining -- bdev/chaining.sh@31 -- # config='{ 00:38:54.816 "subsystems": [ 00:38:54.816 { 00:38:54.816 "subsystem": "bdev", 00:38:54.816 "config": [ 00:38:54.816 { 00:38:54.816 "method": "bdev_nvme_attach_controller", 00:38:54.816 "params": { 00:38:54.816 "trtype": "tcp", 
00:38:54.816 "adrfam": "IPv4", 00:38:54.816 "name": "Nvme0", 00:38:54.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.816 "traddr": "10.0.0.2", 00:38:54.816 "trsvcid": "4420" 00:38:54.816 } 00:38:54.816 }, 00:38:54.816 { 00:38:54.816 "method": "bdev_set_options", 00:38:54.816 "params": { 00:38:54.816 "bdev_auto_examine": false 00:38:54.816 } 00:38:54.816 } 00:38:54.816 ] 00:38:54.816 } 00:38:54.816 ] 00:38:54.816 }' 00:38:54.816 10:49:07 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.15sMGQmKI4 --ob Nvme0n1 --bs 65536 --count 1 00:38:54.816 10:49:07 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:38:54.816 "subsystems": [ 00:38:54.816 { 00:38:54.816 "subsystem": "bdev", 00:38:54.816 "config": [ 00:38:54.816 { 00:38:54.816 "method": "bdev_nvme_attach_controller", 00:38:54.816 "params": { 00:38:54.816 "trtype": "tcp", 00:38:54.816 "adrfam": "IPv4", 00:38:54.816 "name": "Nvme0", 00:38:54.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.816 "traddr": "10.0.0.2", 00:38:54.816 "trsvcid": "4420" 00:38:54.816 } 00:38:54.816 }, 00:38:54.816 { 00:38:54.816 "method": "bdev_set_options", 00:38:54.816 "params": { 00:38:54.816 "bdev_auto_examine": false 00:38:54.816 } 00:38:54.816 } 00:38:54.816 ] 00:38:54.816 } 00:38:54.816 ] 00:38:54.816 }' 00:38:54.816 [2024-07-26 10:49:07.532514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:38:54.816 [2024-07-26 10:49:07.532571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629049 ] 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: 
Requested device 0000:3d:02.4 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:54.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:54.816 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:54.816 [2024-07-26 10:49:07.664281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.817 [2024-07-26 10:49:07.707982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.333  Copying: 64/64 [kB] (average 15 MBps) 00:38:55.333 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:55.333 
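The surrounding xtrace lines come from the test's get_stat/update_stats helpers, which pull a single counter out of accel_get_stats and cache it in the stats array for later comparison. A stand-alone sketch of the same pattern, assuming SPDK's scripts/rpc.py is called directly instead of the test's rpc_cmd wrapper (the jq filters are the ones visible in the trace):

    # get_stat sequence_executed      -> top-level sequence counter
    # get_stat executed encrypt       -> executed count for one opcode
    get_stat() {
        local event=$1 opcode=$2
        if [ -z "$opcode" ]; then
            ./scripts/rpc.py accel_get_stats | jq -r ".$event"
        else
            ./scripts/rpc.py accel_get_stats | \
                jq -r ".operations[] | select(.opcode == \"$opcode\").executed"
        fi
    }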
10:49:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.333 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.333 10:49:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@96 -- # update_stats 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@51 -- # get_stat 
sequence_executed 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:55.592 10:49:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.592 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:55.593 10:49:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:55.593 10:49:08 
chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.593 10:49:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:55.593 10:49:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.gGjQZSmuC4 --ib Nvme0n1 --bs 65536 --count 1 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@25 -- # local config 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:38:55.852 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@31 -- # config='{ 00:38:55.852 "subsystems": [ 00:38:55.852 { 00:38:55.852 "subsystem": "bdev", 00:38:55.852 "config": [ 00:38:55.852 { 00:38:55.852 "method": "bdev_nvme_attach_controller", 00:38:55.852 "params": { 00:38:55.852 "trtype": "tcp", 00:38:55.852 "adrfam": "IPv4", 00:38:55.852 "name": "Nvme0", 00:38:55.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.852 "traddr": "10.0.0.2", 00:38:55.852 "trsvcid": "4420" 00:38:55.852 } 00:38:55.852 }, 00:38:55.852 { 00:38:55.852 "method": "bdev_set_options", 00:38:55.852 "params": { 00:38:55.852 "bdev_auto_examine": false 00:38:55.852 } 00:38:55.852 } 00:38:55.852 ] 00:38:55.852 } 00:38:55.852 ] 00:38:55.852 }' 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.gGjQZSmuC4 --ib Nvme0n1 --bs 65536 --count 1 00:38:55.852 10:49:08 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:38:55.852 "subsystems": [ 00:38:55.852 { 00:38:55.852 "subsystem": "bdev", 00:38:55.852 "config": [ 00:38:55.852 { 00:38:55.852 "method": "bdev_nvme_attach_controller", 00:38:55.852 "params": { 00:38:55.852 "trtype": "tcp", 00:38:55.852 "adrfam": "IPv4", 00:38:55.852 "name": "Nvme0", 00:38:55.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.852 "traddr": "10.0.0.2", 00:38:55.852 "trsvcid": "4420" 00:38:55.852 } 00:38:55.852 }, 00:38:55.852 { 00:38:55.852 "method": "bdev_set_options", 00:38:55.852 "params": { 00:38:55.852 "bdev_auto_examine": false 00:38:55.852 } 00:38:55.852 } 00:38:55.852 ] 00:38:55.852 } 00:38:55.852 ] 00:38:55.852 }' 00:38:55.852 [2024-07-26 10:49:08.604419] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
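Every spdk_dd call in this test builds its configuration the same way: gen_nvme.sh emits a bdev subsystem pointing at the remote NVMe-oF/TCP target, jq appends a bdev_set_options entry that turns bdev_auto_examine off, and the result is handed to spdk_dd on an anonymous file descriptor (the /dev/fd/62 seen in the trace). A compact sketch of that flow, with a placeholder output file instead of the mktemp path used by the script:

    config=$(./scripts/gen_nvme.sh --mode=remote --json-with-subsystems \
                 --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 |
             jq '.subsystems[0].config[.subsystems[0].config | length] |=
                 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}')
    # read 64 KiB back from the crypto-chained bdev into a local file
    ./build/bin/spdk_dd -c <(echo "$config") --of /tmp/readback.bin --ib Nvme0n1 --bs 65536 --count 1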
00:38:55.852 [2024-07-26 10:49:08.604480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629333 ]
00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:38:55.852 EAL: Requested devices 0000:3d:01.0 through 0000:3f:01.6 cannot be used (the message pair above repeats once per QAT virtual function in that range)
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:55.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:55.852 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:55.852 [2024-07-26 10:49:08.738366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.111 [2024-07-26 10:49:08.780573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.628  Copying: 64/64 [kB] (average 31 MBps) 00:38:56.628 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:38:56.628 10:49:09 chaining -- 
bdev/chaining.sh@102 -- # get_stat executed decrypt 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:56.628 10:49:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:38:56.628 10:49:09 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.15sMGQmKI4 /tmp/tmp.gGjQZSmuC4 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@25 -- # local config 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:38:56.629 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@31 -- # config='{ 00:38:56.629 "subsystems": [ 00:38:56.629 { 00:38:56.629 "subsystem": "bdev", 00:38:56.629 "config": [ 00:38:56.629 { 00:38:56.629 "method": "bdev_nvme_attach_controller", 00:38:56.629 "params": { 00:38:56.629 "trtype": "tcp", 00:38:56.629 "adrfam": "IPv4", 00:38:56.629 "name": "Nvme0", 00:38:56.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.629 "traddr": "10.0.0.2", 00:38:56.629 "trsvcid": "4420" 00:38:56.629 } 00:38:56.629 }, 00:38:56.629 { 00:38:56.629 "method": "bdev_set_options", 00:38:56.629 "params": { 00:38:56.629 "bdev_auto_examine": false 00:38:56.629 } 00:38:56.629 } 00:38:56.629 ] 00:38:56.629 } 00:38:56.629 ] 00:38:56.629 }' 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:38:56.629 "subsystems": [ 00:38:56.629 { 00:38:56.629 "subsystem": "bdev", 00:38:56.629 
"config": [ 00:38:56.629 { 00:38:56.629 "method": "bdev_nvme_attach_controller", 00:38:56.629 "params": { 00:38:56.629 "trtype": "tcp", 00:38:56.629 "adrfam": "IPv4", 00:38:56.629 "name": "Nvme0", 00:38:56.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.629 "traddr": "10.0.0.2", 00:38:56.629 "trsvcid": "4420" 00:38:56.629 } 00:38:56.629 }, 00:38:56.629 { 00:38:56.629 "method": "bdev_set_options", 00:38:56.629 "params": { 00:38:56.629 "bdev_auto_examine": false 00:38:56.629 } 00:38:56.629 } 00:38:56.629 ] 00:38:56.629 } 00:38:56.629 ] 00:38:56.629 }' 00:38:56.629 10:49:09 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:38:56.888 [2024-07-26 10:49:09.578555] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:38:56.888 [2024-07-26 10:49:09.578617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629544 ] 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.0 cannot 
be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:56.888 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:56.888 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:56.888 [2024-07-26 10:49:09.711898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.888 [2024-07-26 10:49:09.755404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.455  Copying: 64/64 [kB] (average 20 MBps) 00:38:57.455 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@106 -- # update_stats 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:57.455 10:49:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.455 10:49:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:57.455 10:49:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:38:57.455 10:49:10 chaining -- 
bdev/chaining.sh@37 -- # local event opcode rpc 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:57.455 10:49:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:57.455 10:49:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.455 10:49:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:57.455 10:49:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:57.714 10:49:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.714 10:49:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:57.714 10:49:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:57.714 10:49:10 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.714 10:49:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:57.714 10:49:10 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.15sMGQmKI4 --ob Nvme0n1 --bs 4096 --count 16 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@25 -- # local config 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:38:57.714 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@31 -- # config='{ 00:38:57.714 
"subsystems": [ 00:38:57.714 { 00:38:57.714 "subsystem": "bdev", 00:38:57.714 "config": [ 00:38:57.714 { 00:38:57.714 "method": "bdev_nvme_attach_controller", 00:38:57.714 "params": { 00:38:57.714 "trtype": "tcp", 00:38:57.714 "adrfam": "IPv4", 00:38:57.714 "name": "Nvme0", 00:38:57.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:57.714 "traddr": "10.0.0.2", 00:38:57.714 "trsvcid": "4420" 00:38:57.714 } 00:38:57.714 }, 00:38:57.714 { 00:38:57.714 "method": "bdev_set_options", 00:38:57.714 "params": { 00:38:57.714 "bdev_auto_examine": false 00:38:57.714 } 00:38:57.714 } 00:38:57.714 ] 00:38:57.714 } 00:38:57.714 ] 00:38:57.714 }' 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.15sMGQmKI4 --ob Nvme0n1 --bs 4096 --count 16 00:38:57.714 10:49:10 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:38:57.714 "subsystems": [ 00:38:57.714 { 00:38:57.714 "subsystem": "bdev", 00:38:57.714 "config": [ 00:38:57.714 { 00:38:57.714 "method": "bdev_nvme_attach_controller", 00:38:57.714 "params": { 00:38:57.714 "trtype": "tcp", 00:38:57.714 "adrfam": "IPv4", 00:38:57.714 "name": "Nvme0", 00:38:57.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:57.714 "traddr": "10.0.0.2", 00:38:57.714 "trsvcid": "4420" 00:38:57.714 } 00:38:57.714 }, 00:38:57.714 { 00:38:57.714 "method": "bdev_set_options", 00:38:57.714 "params": { 00:38:57.714 "bdev_auto_examine": false 00:38:57.714 } 00:38:57.714 } 00:38:57.714 ] 00:38:57.714 } 00:38:57.714 ] 00:38:57.714 }' 00:38:57.715 [2024-07-26 10:49:10.566830] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:38:57.715 [2024-07-26 10:49:10.566891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629653 ] 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:57.973 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.973 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:57.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:57.974 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:57.974 [2024-07-26 10:49:10.700779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.974 [2024-07-26 10:49:10.742840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.491  Copying: 64/64 [kB] (average 31 MBps) 00:38:58.491 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:58.491 
10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:58.491 10:49:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.491 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:58.492 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
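The arithmetic checks here compare the counters cached by update_stats before the 16 x 4 KiB transfer with the values read afterwards: one new accel sequence per 4 KiB block, and two encrypt operations per block because two crypto bdevs (crypto0/crypto1) sit on the write path, hence the +16 and +32 deltas. A minimal before/after sketch of the same idea, again using scripts/rpc.py in place of the test's rpc_cmd wrapper:

    before=$(./scripts/rpc.py accel_get_stats | jq -r .sequence_executed)
    # ... run spdk_dd --if <input> --ob Nvme0n1 --bs 4096 --count 16 here ...
    after=$(./scripts/rpc.py accel_get_stats | jq -r .sequence_executed)
    (( after == before + 16 )) || echo "unexpected sequence_executed delta: $((after - before))"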
00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@114 -- # update_stats 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:58.492 10:49:11 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:58.492 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.492 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.492 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@40 -- # [[ 
-z copy ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:58.750 10:49:11 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@117 -- # : 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.gGjQZSmuC4 --ib Nvme0n1 --bs 4096 --count 16 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@25 -- # local config 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:38:58.750 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:38:58.750 10:49:11 chaining -- bdev/chaining.sh@31 -- # config='{ 00:38:58.750 "subsystems": [ 00:38:58.750 { 00:38:58.750 "subsystem": "bdev", 00:38:58.750 "config": [ 00:38:58.751 { 00:38:58.751 "method": "bdev_nvme_attach_controller", 00:38:58.751 "params": { 00:38:58.751 "trtype": "tcp", 00:38:58.751 "adrfam": "IPv4", 00:38:58.751 "name": "Nvme0", 00:38:58.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.751 "traddr": "10.0.0.2", 00:38:58.751 "trsvcid": "4420" 00:38:58.751 } 00:38:58.751 }, 00:38:58.751 { 00:38:58.751 "method": "bdev_set_options", 00:38:58.751 "params": { 00:38:58.751 "bdev_auto_examine": false 00:38:58.751 } 00:38:58.751 } 00:38:58.751 ] 00:38:58.751 } 00:38:58.751 ] 00:38:58.751 }' 00:38:58.751 10:49:11 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.gGjQZSmuC4 --ib Nvme0n1 --bs 4096 --count 16 00:38:58.751 10:49:11 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:38:58.751 "subsystems": [ 00:38:58.751 { 00:38:58.751 "subsystem": "bdev", 00:38:58.751 "config": [ 00:38:58.751 { 00:38:58.751 "method": "bdev_nvme_attach_controller", 00:38:58.751 "params": { 00:38:58.751 "trtype": "tcp", 00:38:58.751 "adrfam": "IPv4", 00:38:58.751 "name": "Nvme0", 00:38:58.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.751 "traddr": "10.0.0.2", 00:38:58.751 "trsvcid": "4420" 00:38:58.751 } 00:38:58.751 }, 00:38:58.751 { 00:38:58.751 "method": "bdev_set_options", 00:38:58.751 "params": { 00:38:58.751 "bdev_auto_examine": false 00:38:58.751 } 00:38:58.751 } 00:38:58.751 ] 00:38:58.751 } 00:38:58.751 ] 00:38:58.751 }' 00:38:58.751 [2024-07-26 10:49:11.634390] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
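Note: the update_stats pass above snapshots the accel framework counters (sequence_executed 31, encrypt 36, decrypt 14, copy 4) before the read-back, and the jq expression applied to the gen_nvme.sh output simply appends a bdev_set_options entry disabling auto-examine to the generated config before it is handed to spdk_dd over /dev/fd/62. A minimal sketch of the stat helpers as they appear in the trace follows; the jq filters are copied from the trace, the wrapper bodies are a simplified reconstruction rather than the verbatim bdev/chaining.sh source:

    # query one accel counter, either top-level or per-opcode, via the given RPC wrapper
    get_stat() {
        local event=$1 opcode=$2 rpc=${3:-rpc_cmd}
        if [[ -z $opcode ]]; then
            "$rpc" accel_get_stats | jq -r ".$event"
        else
            "$rpc" accel_get_stats | jq -r ".operations[] | select(.opcode == \"$opcode\").$event"
        fi
    }

    declare -A stats
    update_stats() {
        stats["sequence_executed"]=$(get_stat sequence_executed)
        stats["encrypt_executed"]=$(get_stat executed encrypt)
        stats["decrypt_executed"]=$(get_stat executed decrypt)
        stats["copy_executed"]=$(get_stat executed copy)
    }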
00:38:58.751 [2024-07-26 10:49:11.634454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629944 ] 00:38:59.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:59.010 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:59.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.010 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:59.010 [2024-07-26 10:49:11.771587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.010 [2024-07-26 10:49:11.814975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.527  Copying: 64/64 [kB] (average 695 kBps) 00:38:59.527 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # opcode= 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:38:59.527 10:49:12 chaining -- 
bdev/chaining.sh@121 -- # get_stat executed decrypt 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:38:59.527 10:49:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.527 10:49:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:38:59.528 10:49:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:38:59.528 10:49:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:38:59.528 10:49:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.528 10:49:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:38:59.528 10:49:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.786 10:49:12 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:38:59.786 10:49:12 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.15sMGQmKI4 /tmp/tmp.gGjQZSmuC4 00:38:59.786 10:49:12 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:38:59.786 10:49:12 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:38:59.786 10:49:12 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.15sMGQmKI4 /tmp/tmp.gGjQZSmuC4 00:38:59.786 10:49:12 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@117 -- # sync 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@120 -- # set +e 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:59.786 rmmod nvme_tcp 00:38:59.786 rmmod nvme_fabrics 00:38:59.786 rmmod nvme_keyring 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@124 -- # set -e 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@125 -- # return 0 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@489 -- # '[' -n 3628868 ']' 00:38:59.786 10:49:12 chaining -- nvmf/common.sh@490 -- # killprocess 3628868 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@950 -- # '[' -z 3628868 ']' 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@954 -- # kill -0 3628868 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@955 -- # uname 
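Note: the four assertions above check the counter deltas produced by the 16 x 4096 B read back from Nvme0n1. The sequence count grows by exactly one per IO (31 -> 47), the decrypt count grows by two per IO (14 -> 46), which is consistent with each block passing through two crypto stages on the target side, and the encrypt and copy counts stay at 36 and 4 because the read path performs no encryption and no extra copies. Restated compactly with the helpers sketched earlier:

    # expected deltas for a 16-block read through the two-stage crypto chain
    (( $(get_stat sequence_executed) == stats[sequence_executed] + 16 ))   # 31 + 16 = 47
    (( $(get_stat executed decrypt)  == stats[decrypt_executed] + 32 ))    # 14 + 32 = 46
    (( $(get_stat executed encrypt)  == stats[encrypt_executed] ))         # unchanged, 36
    (( $(get_stat executed copy)     == stats[copy_executed] ))            # unchanged, 4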
00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3628868 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3628868' 00:38:59.786 killing process with pid 3628868 00:38:59.786 10:49:12 chaining -- common/autotest_common.sh@969 -- # kill 3628868 00:38:59.787 10:49:12 chaining -- common/autotest_common.sh@974 -- # wait 3628868 00:39:00.045 10:49:12 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:00.045 10:49:12 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:00.045 10:49:12 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:00.045 10:49:12 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:00.045 10:49:12 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:00.045 10:49:12 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.045 10:49:12 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:00.045 10:49:12 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.015 10:49:14 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:02.015 10:49:14 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:39:02.015 10:49:14 chaining -- bdev/chaining.sh@132 -- # bperfpid=3630532 00:39:02.015 10:49:14 chaining -- bdev/chaining.sh@134 -- # waitforlisten 3630532 00:39:02.015 10:49:14 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:39:02.015 10:49:14 chaining -- common/autotest_common.sh@831 -- # '[' -z 3630532 ']' 00:39:02.015 10:49:14 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.015 10:49:14 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:02.015 10:49:14 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.015 10:49:14 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:02.015 10:49:14 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:02.015 [2024-07-26 10:49:14.912174] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
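Note: once the cmp of the two temp files passes, tgtcleanup removes them, unloads the nvme-tcp modules and kills the nvmf target, and the test moves on to the bdevperf-based cases. The flags on the bdevperf command launched above are worth spelling out; the meanings below follow the usual documentation of the SPDK bdevperf example app, and the -z semantics in particular are an assumption inferred from the explicit bdevperf.py perform_tests call later in the trace:

    # Flags on the bdevperf invocation above (the run is driven over RPC):
    #   -t 5              run the workload for 5 seconds
    #   -w verify         write each block, read it back and compare
    #   -o 4096           4 KiB IO size
    #   -q 256            queue depth of 256
    #   --wait-for-rpc    defer subsystem initialization until told to start over RPC
    #   -z                do not start the test until the perform_tests RPC arrives
    build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z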
00:39:02.015 [2024-07-26 10:49:14.912235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630532 ] 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:02.274 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.274 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:02.275 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:02.275 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:02.275 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:02.275 [2024-07-26 10:49:15.046587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.275 [2024-07-26 10:49:15.090675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.211 10:49:15 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.211 10:49:15 chaining -- common/autotest_common.sh@864 -- # return 0 00:39:03.211 10:49:15 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:39:03.211 10:49:15 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.211 10:49:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:03.211 malloc0 00:39:03.211 true 00:39:03.211 true 00:39:03.211 [2024-07-26 10:49:15.947653] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:39:03.211 crypto0 00:39:03.211 [2024-07-26 10:49:15.955676] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:39:03.211 crypto1 00:39:03.211 10:49:15 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.211 10:49:15 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:03.211 Running I/O for 5 seconds... 
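Note: the rpc_cmd invocation above takes its RPC calls from stdin, so the individual methods are not echoed in the trace; the visible side effects are a malloc0 bdev, two bare "true" results (consistent with two accel key creations succeeding), and the vbdev_crypto notices showing that keys key0 and key1 are resolved while crypto0 and crypto1 are stacked. One plausible way to build such a chain with the standard rpc.py commands is sketched below; every size, key value and flag here is a hypothetical illustration, not the configuration actually used by the test:

    # hypothetical reconstruction of a two-stage crypto chain on a malloc bdev
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py accel_crypto_key_create -c AES_XTS -n key0 \
        -k 00112233445566778899aabbccddeeff -e ffeeddccbbaa99887766554433221100
    scripts/rpc.py accel_crypto_key_create -c AES_XTS -n key1 \
        -k a1a2a3a4a5a6a7a8a9aaabacadaeafb0 -e b0bfbebdbcbbbab9b8b7b6b5b4b3b2b1
    scripts/rpc.py bdev_crypto_create -n key0 malloc0 crypto0   # stage 1 on malloc0
    scripts/rpc.py bdev_crypto_create -n key1 crypto0 crypto1   # stage 2, chained on crypto0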
00:39:08.477 00:39:08.477 Latency(us) 00:39:08.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.477 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:39:08.477 Verification LBA range: start 0x0 length 0x2000 00:39:08.477 crypto1 : 5.01 12446.35 48.62 0.00 0.00 20507.99 5111.81 13421.77 00:39:08.477 =================================================================================================================== 00:39:08.477 Total : 12446.35 48.62 0.00 0.00 20507.99 5111.81 13421.77 00:39:08.477 0 00:39:08.477 10:49:21 chaining -- bdev/chaining.sh@146 -- # killprocess 3630532 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@950 -- # '[' -z 3630532 ']' 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@954 -- # kill -0 3630532 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@955 -- # uname 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3630532 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3630532' 00:39:08.477 killing process with pid 3630532 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@969 -- # kill 3630532 00:39:08.477 Received shutdown signal, test time was about 5.000000 seconds 00:39:08.477 00:39:08.477 Latency(us) 00:39:08.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.477 =================================================================================================================== 00:39:08.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@974 -- # wait 3630532 00:39:08.477 10:49:21 chaining -- bdev/chaining.sh@152 -- # bperfpid=3631587 00:39:08.477 10:49:21 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:39:08.477 10:49:21 chaining -- bdev/chaining.sh@154 -- # waitforlisten 3631587 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@831 -- # '[' -z 3631587 ']' 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:08.477 10:49:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:08.735 [2024-07-26 10:49:21.401718] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
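Note: in the crypto1 latency table above, the MiB/s column is just IOPS times the 4 KiB IO size, and the average latency is what Little's law predicts for a queue depth of 256 at that rate:

    # MiB/s     = IOPS x IO size       : 12446.35 x 4096 B / 2^20  ≈ 48.62 MiB/s
    # avg (us)  ≈ queue depth / IOPS   : 256 / 12446.35 s ≈ 20.6 ms, matching the 20507.99 us column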
00:39:08.735 [2024-07-26 10:49:21.401781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631587 ] 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:08.735 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.735 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:08.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.736 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:08.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.736 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:08.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:08.736 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:08.736 [2024-07-26 10:49:21.533293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.736 [2024-07-26 10:49:21.578101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.669 10:49:22 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:09.669 10:49:22 chaining -- common/autotest_common.sh@864 -- # return 0 00:39:09.669 10:49:22 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:39:09.669 10:49:22 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.669 10:49:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:09.669 malloc0 00:39:09.669 true 00:39:09.669 true 00:39:09.669 [2024-07-26 10:49:22.442378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:39:09.669 [2024-07-26 10:49:22.442424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.669 [2024-07-26 10:49:22.442444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf7d170 00:39:09.669 [2024-07-26 10:49:22.442456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.669 [2024-07-26 10:49:22.443427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.669 [2024-07-26 10:49:22.443451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:39:09.669 pt0 00:39:09.669 [2024-07-26 10:49:22.450407] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:39:09.669 crypto0 00:39:09.669 [2024-07-26 10:49:22.458427] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:39:09.670 crypto1 00:39:09.670 10:49:22 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.670 10:49:22 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:09.670 Running I/O for 5 seconds... 
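Note: this second bdevperf case differs from the previous one only in that a passthru vbdev (pt0) claims malloc0 first and the two crypto stages sit on top of it, as the vbdev_passthru notices above show. A hypothetical shape of that stack with standard rpc.py commands, with key creation as in the earlier sketch and all names taken from the log:

    scripts/rpc.py bdev_passthru_create -b malloc0 -p pt0   # pt0 claims malloc0
    scripts/rpc.py bdev_crypto_create -n key0 pt0 crypto0
    scripts/rpc.py bdev_crypto_create -n key1 crypto0 crypto1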
00:39:14.932 00:39:14.932 Latency(us) 00:39:14.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.932 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:39:14.932 Verification LBA range: start 0x0 length 0x2000 00:39:14.932 crypto1 : 5.01 9713.18 37.94 0.00 0.00 26280.61 1402.47 15938.36 00:39:14.932 =================================================================================================================== 00:39:14.932 Total : 9713.18 37.94 0.00 0.00 26280.61 1402.47 15938.36 00:39:14.932 0 00:39:14.932 10:49:27 chaining -- bdev/chaining.sh@167 -- # killprocess 3631587 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@950 -- # '[' -z 3631587 ']' 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@954 -- # kill -0 3631587 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@955 -- # uname 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3631587 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3631587' 00:39:14.932 killing process with pid 3631587 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@969 -- # kill 3631587 00:39:14.932 Received shutdown signal, test time was about 5.000000 seconds 00:39:14.932 00:39:14.932 Latency(us) 00:39:14.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.932 =================================================================================================================== 00:39:14.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:14.932 10:49:27 chaining -- common/autotest_common.sh@974 -- # wait 3631587 00:39:15.190 10:49:27 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:39:15.190 10:49:27 chaining -- bdev/chaining.sh@170 -- # killprocess 3631587 00:39:15.190 10:49:27 chaining -- common/autotest_common.sh@950 -- # '[' -z 3631587 ']' 00:39:15.190 10:49:27 chaining -- common/autotest_common.sh@954 -- # kill -0 3631587 00:39:15.190 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3631587) - No such process 00:39:15.190 10:49:27 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 3631587 is not found' 00:39:15.190 Process with pid 3631587 is not found 00:39:15.190 10:49:27 chaining -- bdev/chaining.sh@171 -- # wait 3631587 00:39:15.190 10:49:27 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:15.190 10:49:27 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:15.190 10:49:27 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:39:15.190 10:49:27 chaining 
-- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:39:15.190 10:49:27 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@296 -- # e810=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@297 -- # x722=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@298 -- # mlx=() 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:15.190 10:49:27 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:39:15.191 Found 0000:20:00.0 (0x8086 - 0x159b) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:15.191 
10:49:27 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:39:15.191 Found 0000:20:00.1 (0x8086 - 0x159b) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:39:15.191 Found net devices under 0000:20:00.0: cvl_0_0 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:39:15.191 Found net devices under 0000:20:00.1: cvl_0_1 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:15.191 10:49:27 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:15.191 10:49:28 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:15.191 10:49:28 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:15.191 10:49:28 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:15.191 10:49:28 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:15.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:15.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:39:15.449 00:39:15.449 --- 10.0.0.2 ping statistics --- 00:39:15.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.449 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:15.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:15.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:39:15.449 00:39:15.449 --- 10.0.0.1 ping statistics --- 00:39:15.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.449 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@422 -- # return 0 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:15.449 10:49:28 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@481 -- # nvmfpid=3632681 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@482 -- # waitforlisten 3632681 00:39:15.449 10:49:28 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@831 -- # '[' -z 3632681 ']' 00:39:15.449 10:49:28 chaining -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:15.449 10:49:28 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:15.449 [2024-07-26 10:49:28.295119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:39:15.449 [2024-07-26 10:49:28.295183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:15.708 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:15.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:15.708 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:15.708 [2024-07-26 10:49:28.425861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.708 [2024-07-26 10:49:28.468875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:15.708 [2024-07-26 10:49:28.468918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:15.708 [2024-07-26 10:49:28.468931] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:15.708 [2024-07-26 10:49:28.468943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:15.708 [2024-07-26 10:49:28.468953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
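Note: the nvmftestinit sequence traced a little earlier builds a loopback NVMe/TCP topology out of the two cvl_0_* ports found on this host by moving one of them into a network namespace; condensed from the commands visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back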
00:39:15.708 [2024-07-26 10:49:28.468978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@864 -- # return 0 00:39:16.643 10:49:29 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:16.643 10:49:29 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.643 10:49:29 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:16.643 malloc0 00:39:16.643 [2024-07-26 10:49:29.256835] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:16.643 [2024-07-26 10:49:29.273033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.643 10:49:29 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.643 10:49:29 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:39:16.643 10:49:29 chaining -- bdev/chaining.sh@189 -- # bperfpid=3632958 00:39:16.644 10:49:29 chaining -- bdev/chaining.sh@191 -- # waitforlisten 3632958 /var/tmp/bperf.sock 00:39:16.644 10:49:29 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:39:16.644 10:49:29 chaining -- common/autotest_common.sh@831 -- # '[' -z 3632958 ']' 00:39:16.644 10:49:29 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:16.644 10:49:29 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:16.644 10:49:29 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:16.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:16.644 10:49:29 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:16.644 10:49:29 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:16.644 [2024-07-26 10:49:29.341304] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 
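Note: the rpc_cmd block above configures the freshly started nvmf target; only its side effects show up in the log (a malloc0 bdev, the TCP transport init notice and a listener on 10.0.0.2:4420). A hypothetical reconstruction with standard rpc.py commands, reusing the subsystem NQN from the earlier dd test; the transport options, serial number and bdev size are assumptions, not values taken from the trace:

    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420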
00:39:16.644 [2024-07-26 10:49:29.341360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632958 ] 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:16.644 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:16.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:16.644 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:16.644 [2024-07-26 10:49:29.475836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.644 [2024-07-26 10:49:29.520287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.579 10:49:30 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:17.579 10:49:30 chaining -- common/autotest_common.sh@864 -- # return 0 00:39:17.579 10:49:30 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:39:17.579 10:49:30 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:39:17.837 [2024-07-26 10:49:30.634905] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:39:17.837 nvme0n1 00:39:17.837 true 00:39:17.837 crypto0 00:39:17.837 10:49:30 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:18.095 Running I/O for 5 seconds... 
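For reference, the trace above drives bdevperf through a per-test RPC socket (/var/tmp/bperf.sock) rather than the default SPDK socket. A minimal sketch of that idiom, using only paths and commands that appear in this log (the rpc_bperf helper name mirrors bdev/chaining.sh@22; the rest is illustrative, not the script verbatim):

  # Wrapper that points rpc.py at the bdevperf-specific socket.
  rpc_bperf() {
      /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
  }

  # With the crypto bdev (crypto0) created on top of nvme0n1, kick off the timed run.
  # perform_tests starts I/O in a bdevperf that was launched with -z (wait for RPC).
  /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests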
00:39:23.398
00:39:23.398 Latency(us)
00:39:23.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:23.398 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:39:23.398 Verification LBA range: start 0x0 length 0x2000
00:39:23.398 crypto0 : 5.02 9534.30 37.24 0.00 0.00 26769.27 4168.09 22020.10
00:39:23.398 ===================================================================================================================
00:39:23.398 Total : 9534.30 37.24 0.00 0.00 26769.27 4168.09 22020.10
00:39:23.398 0
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@39 -- # opcode=
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:39:23.398 10:49:35 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@205 -- # sequence=95726
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@39 -- # event=executed
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]]
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed'
00:39:23.398 10:49:36 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@206 -- # encrypt=47863
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@39 -- # event=executed
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf
00:39:23.656 10:49:36 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]]
00:39:23.657 10:49:36 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats
00:39:23.657 10:49:36 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:39:23.657 10:49:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed'
00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@207 -- # decrypt=47863
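The checks that follow (chaining.sh@210 through @212 in this trace) reduce to reading accel_get_stats and asserting that the encrypt and decrypt counts add up to both sequence_executed and the crc32c count. A condensed, illustrative version of that verification, reusing the rpc_bperf wrapper and the jq filters shown above (the script itself queries each counter with a separate RPC call):

  stats=$(rpc_bperf accel_get_stats)

  sequence=$(jq -r '.sequence_executed' <<< "$stats")
  encrypt=$(jq -r '.operations[] | select(.opcode == "encrypt").executed' <<< "$stats")
  decrypt=$(jq -r '.operations[] | select(.opcode == "decrypt").executed' <<< "$stats")
  crc32c=$(jq -r '.operations[] | select(.opcode == "crc32c").executed' <<< "$stats")

  # Each executed sequence chains one crypto op with one crc32c op, so the
  # counters must line up; in this run 47863 + 47863 == 95726 on both checks.
  (( sequence > 0 ))
  (( encrypt + decrypt == sequence ))
  (( encrypt + decrypt == crc32c ))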
00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:39:23.915 10:49:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:39:23.916 10:49:36 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:39:23.916 10:49:36 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:39:23.916 10:49:36 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:23.916 10:49:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:39:24.175 10:49:36 chaining -- bdev/chaining.sh@208 -- # crc32c=95726 00:39:24.175 10:49:36 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:39:24.175 10:49:36 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:39:24.175 10:49:36 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:39:24.175 10:49:36 chaining -- bdev/chaining.sh@214 -- # killprocess 3632958 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@950 -- # '[' -z 3632958 ']' 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@954 -- # kill -0 3632958 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@955 -- # uname 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3632958 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3632958' 00:39:24.175 killing process with pid 3632958 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@969 -- # kill 3632958 00:39:24.175 Received shutdown signal, test time was about 5.000000 seconds 00:39:24.175 00:39:24.175 Latency(us) 00:39:24.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.175 =================================================================================================================== 00:39:24.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:24.175 10:49:36 chaining -- common/autotest_common.sh@974 -- # wait 3632958 00:39:24.433 10:49:37 chaining -- bdev/chaining.sh@219 -- # bperfpid=3634129 00:39:24.433 10:49:37 chaining -- bdev/chaining.sh@221 -- # waitforlisten 3634129 /var/tmp/bperf.sock 00:39:24.433 10:49:37 chaining -- common/autotest_common.sh@831 -- # '[' -z 3634129 ']' 00:39:24.433 10:49:37 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:24.433 10:49:37 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:24.433 10:49:37 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:24.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:39:24.433 10:49:37 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:24.433 10:49:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:24.433 10:49:37 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:39:24.433 [2024-07-26 10:49:37.213214] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 22.11.4 initialization... 00:39:24.433 [2024-07-26 10:49:37.213361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634129 ] 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:24.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:24.692 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:24.692 [2024-07-26 10:49:37.427495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.692 [2024-07-26 10:49:37.471006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.257 10:49:38 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:25.257 10:49:38 chaining -- common/autotest_common.sh@864 -- # return 0 00:39:25.257 10:49:38 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:39:25.257 10:49:38 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:39:25.515 [2024-07-26 10:49:38.379947] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:39:25.515 nvme0n1 00:39:25.515 true 00:39:25.515 crypto0 00:39:25.515 10:49:38 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:25.773 Running I/O for 5 seconds... 
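This second pass repeats the pattern with 64 KiB I/O at queue depth 32. The bdevperf launch logged at chaining.sh@217 above can be read as the following standalone sketch (the backgrounding, PID capture, and polling loop are illustrative; the harness itself relies on waitforlisten from autotest_common.sh):

  # -z makes bdevperf wait for the perform_tests RPC; --wait-for-rpc defers
  # framework initialization so configuration RPCs can be issued first.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z &
  bperfpid=$!

  # One plausible way to wait for the socket to answer before sending RPCs.
  until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done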
00:39:31.036
00:39:31.036 Latency(us)
00:39:31.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:31.036 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:39:31.036 Verification LBA range: start 0x0 length 0x200
00:39:31.036 crypto0 : 5.01 1872.28 117.02 0.00 0.00 16744.05 969.93 21390.95
00:39:31.036 ===================================================================================================================
00:39:31.036 Total : 1872.28 117.02 0.00 0.00 16744.05 969.93 21390.95
00:39:31.036 0
00:39:31.036 10:49:43 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed
00:39:31.036 10:49:43 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@39 -- # opcode=
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@233 -- # sequence=18752
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@39 -- # event=executed
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]]
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed'
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats
00:39:31.037 10:49:43 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@234 -- # encrypt=9376
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@39 -- # event=executed
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]]
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed'
00:39:31.294 10:49:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:39:31.551 10:49:44 chaining -- bdev/chaining.sh@235 -- # decrypt=9376
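As a quick sanity check on the table above, the MiB/s column is simply IOPS multiplied by the I/O size: 1872.28 IOPS * 65536 B is about 117.02 MiB/s, just as 9534.30 IOPS * 4096 B gave about 37.24 MiB/s in the first pass. For example:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1872.28 * 65536 / (1024 * 1024) }'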
00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@39 -- # event=executed 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:31.552 10:49:44 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:39:31.809 10:49:44 chaining -- bdev/chaining.sh@236 -- # crc32c=18752 00:39:31.809 10:49:44 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:39:31.809 10:49:44 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:39:31.809 10:49:44 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:39:31.809 10:49:44 chaining -- bdev/chaining.sh@242 -- # killprocess 3634129 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@950 -- # '[' -z 3634129 ']' 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@954 -- # kill -0 3634129 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@955 -- # uname 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3634129 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3634129' 00:39:31.809 killing process with pid 3634129 00:39:31.809 10:49:44 chaining -- common/autotest_common.sh@969 -- # kill 3634129 00:39:31.810 Received shutdown signal, test time was about 5.000000 seconds 00:39:31.810 00:39:31.810 Latency(us) 00:39:31.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.810 =================================================================================================================== 00:39:31.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:31.810 10:49:44 chaining -- common/autotest_common.sh@974 -- # wait 3634129 00:39:32.067 10:49:44 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:39:32.067 10:49:44 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:32.067 10:49:44 chaining -- nvmf/common.sh@117 -- # sync 00:39:32.067 10:49:44 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:32.067 10:49:44 chaining -- nvmf/common.sh@120 -- # set +e 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:32.068 rmmod nvme_tcp 00:39:32.068 rmmod nvme_fabrics 00:39:32.068 rmmod nvme_keyring 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@124 -- # set -e 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@125 -- # return 0 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@489 -- # 
'[' -n 3632681 ']' 00:39:32.068 10:49:44 chaining -- nvmf/common.sh@490 -- # killprocess 3632681 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@950 -- # '[' -z 3632681 ']' 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@954 -- # kill -0 3632681 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@955 -- # uname 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3632681 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3632681' 00:39:32.068 killing process with pid 3632681 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@969 -- # kill 3632681 00:39:32.068 10:49:44 chaining -- common/autotest_common.sh@974 -- # wait 3632681 00:39:32.326 10:49:45 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:32.326 10:49:45 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:32.326 10:49:45 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:32.326 10:49:45 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:32.326 10:49:45 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:32.326 10:49:45 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.326 10:49:45 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:32.326 10:49:45 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.229 10:49:47 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:34.229 10:49:47 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:39:34.229 00:39:34.229 real 0m50.075s 00:39:34.229 user 0m59.537s 00:39:34.229 sys 0m13.451s 00:39:34.229 10:49:47 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:34.229 10:49:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:39:34.229 ************************************ 00:39:34.229 END TEST chaining 00:39:34.229 ************************************ 00:39:34.488 10:49:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:34.488 10:49:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:34.488 10:49:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:39:34.488 10:49:47 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:39:34.488 10:49:47 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:39:34.488 10:49:47 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:39:34.488 10:49:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:34.488 10:49:47 -- common/autotest_common.sh@10 -- # set +x 00:39:34.488 10:49:47 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:39:34.488 10:49:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:34.488 10:49:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:34.488 10:49:47 -- common/autotest_common.sh@10 -- # set +x 00:39:41.053 INFO: APP EXITING 00:39:41.053 INFO: killing all VMs 00:39:41.053 INFO: killing vhost app 00:39:41.053 INFO: EXIT DONE 00:39:45.242 Waiting for block devices as requested 00:39:45.242 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:45.242 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:45.242 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:45.534 0000:00:04.4 (8086 2021): vfio-pci -> 
ioatdma 00:39:45.534 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:45.534 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:45.793 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:45.793 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:45.793 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:45.793 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:46.051 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:46.051 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:46.051 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:46.309 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:46.309 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:46.309 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:46.567 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:39:51.834 Cleaning 00:39:51.834 Removing: /var/run/dpdk/spdk0/config 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:51.834 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:51.834 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:51.834 Removing: /dev/shm/nvmf_trace.0 00:39:51.834 Removing: /dev/shm/spdk_tgt_trace.pid3293132 00:39:51.834 Removing: /var/run/dpdk/spdk0 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3288083 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3291855 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3293132 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3293767 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3294840 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3295120 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3296178 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3296306 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3296612 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3300199 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3302169 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3302482 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3303020 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3303390 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3303717 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3303998 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3304286 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3304589 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3305191 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3308576 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3308802 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3309100 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3309411 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3309511 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3309579 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3309859 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3310142 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3310421 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3310707 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3310986 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3311272 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3311551 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3311832 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3312123 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3312400 00:39:51.834 Removing: 
/var/run/dpdk/spdk_pid3312687 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3312965 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3313250 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3313530 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3313815 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3314092 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3314382 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3314630 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3314896 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3315148 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3315469 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3315802 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3316092 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3316621 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3317075 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3317742 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3318079 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3318590 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3318652 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3319010 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3319633 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3319930 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3320204 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3324801 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3326903 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3329084 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3330158 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3331498 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3331857 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3332065 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3332088 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3336939 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3337524 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3338822 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3339108 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3347944 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3350069 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3351707 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3356603 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3358829 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3359853 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3364872 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3367744 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3368902 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3380292 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3382950 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3384113 00:39:51.834 Removing: /var/run/dpdk/spdk_pid3396167 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3398698 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3399864 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3411480 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3415389 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3416778 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3430270 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3433110 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3434524 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3447591 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3450558 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3451979 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3465568 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3470066 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3471425 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3472795 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3476319 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3482574 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3485587 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3491743 00:39:51.835 Removing: 
/var/run/dpdk/spdk_pid3495679 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3502205 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3505423 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3513015 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3515887 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3523715 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3526416 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3533703 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3536413 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3541613 00:39:51.835 Removing: /var/run/dpdk/spdk_pid3542059 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3542414 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3542946 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3543552 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3544425 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3545366 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3545727 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3547864 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3550063 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3552128 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3554383 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3562686 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3568144 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3570282 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3572467 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3574647 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3576408 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3585142 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3590534 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3591313 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3591833 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3594363 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3596774 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3599027 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3600357 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3601938 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3602502 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3602683 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3602832 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3603119 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3603267 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3604644 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3606633 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3608592 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3609434 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3610500 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3610776 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3610806 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3610835 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3612008 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3612752 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3613293 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3616199 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3618702 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3620952 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3622280 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3623868 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3624425 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3624642 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3629049 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3629333 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3629544 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3629653 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3629944 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3630532 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3631587 00:39:52.094 Removing: /var/run/dpdk/spdk_pid3632958 00:39:52.353 Removing: 
/var/run/dpdk/spdk_pid3634129 00:39:52.353 Clean 00:39:52.353 10:50:05 -- common/autotest_common.sh@1451 -- # return 0 00:39:52.353 10:50:05 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:39:52.353 10:50:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:52.353 10:50:05 -- common/autotest_common.sh@10 -- # set +x 00:39:52.353 10:50:05 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:39:52.353 10:50:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:52.353 10:50:05 -- common/autotest_common.sh@10 -- # set +x 00:39:52.353 10:50:05 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:39:52.353 10:50:05 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:39:52.353 10:50:05 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:39:52.353 10:50:05 -- spdk/autotest.sh@395 -- # hash lcov 00:39:52.353 10:50:05 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:52.353 10:50:05 -- spdk/autotest.sh@397 -- # hostname 00:39:52.353 10:50:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:39:52.612 geninfo: WARNING: invalid characters removed from testname! 00:40:19.144 10:50:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:40:22.435 10:50:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:40:24.969 10:50:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:40:26.907 10:50:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:40:33.483 10:50:45 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:40:34.859 10:50:47 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:40:37.391 10:50:50 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:37.391 10:50:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:40:37.391 10:50:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:37.391 10:50:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.391 10:50:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.391 10:50:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.391 10:50:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.391 10:50:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.391 10:50:50 -- paths/export.sh@5 -- $ export PATH 00:40:37.391 10:50:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.391 10:50:50 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:37.391 10:50:50 -- common/autobuild_common.sh@447 -- $ date +%s 00:40:37.391 10:50:50 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721983850.XXXXXX 00:40:37.391 10:50:50 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721983850.splZ7W 00:40:37.391 10:50:50 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:40:37.391 10:50:50 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:40:37.391 10:50:50 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/crypto-phy-autotest/dpdk/build 00:40:37.391 10:50:50 -- 
common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/crypto-phy-autotest/dpdk' 00:40:37.391 10:50:50 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:40:37.391 10:50:50 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/dpdk --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:40:37.391 10:50:50 -- common/autobuild_common.sh@463 -- $ get_config_params 00:40:37.391 10:50:50 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:40:37.391 10:50:50 -- common/autotest_common.sh@10 -- $ set +x 00:40:37.391 10:50:50 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/crypto-phy-autotest/dpdk/build' 00:40:37.391 10:50:50 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:40:37.391 10:50:50 -- pm/common@17 -- $ local monitor 00:40:37.391 10:50:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:37.391 10:50:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:37.391 10:50:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:37.391 10:50:50 -- pm/common@21 -- $ date +%s 00:40:37.391 10:50:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:37.391 10:50:50 -- pm/common@21 -- $ date +%s 00:40:37.391 10:50:50 -- pm/common@25 -- $ sleep 1 00:40:37.391 10:50:50 -- pm/common@21 -- $ date +%s 00:40:37.391 10:50:50 -- pm/common@21 -- $ date +%s 00:40:37.391 10:50:50 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721983850 00:40:37.391 10:50:50 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721983850 00:40:37.391 10:50:50 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721983850 00:40:37.391 10:50:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721983850 00:40:37.391 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721983850_collect-vmstat.pm.log 00:40:37.650 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721983850_collect-cpu-load.pm.log 00:40:37.650 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721983850_collect-cpu-temp.pm.log 00:40:37.650 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721983850_collect-bmc-pm.bmc.pm.log 00:40:38.587 10:50:51 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:40:38.587 
10:50:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:40:38.587 10:50:51 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:38.587 10:50:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:40:38.587 10:50:51 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:40:38.587 10:50:51 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:40:38.587 10:50:51 -- spdk/autopackage.sh@19 -- $ timing_finish 00:40:38.587 10:50:51 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:38.587 10:50:51 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:40:38.587 10:50:51 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:40:38.587 10:50:51 -- spdk/autopackage.sh@20 -- $ exit 0 00:40:38.587 10:50:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:38.587 10:50:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:38.587 10:50:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:38.587 10:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:38.587 10:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:40:38.587 10:50:51 -- pm/common@44 -- $ pid=3647512 00:40:38.587 10:50:51 -- pm/common@50 -- $ kill -TERM 3647512 00:40:38.587 10:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:38.587 10:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:40:38.587 10:50:51 -- pm/common@44 -- $ pid=3647514 00:40:38.587 10:50:51 -- pm/common@50 -- $ kill -TERM 3647514 00:40:38.587 10:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:38.587 10:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:40:38.587 10:50:51 -- pm/common@44 -- $ pid=3647516 00:40:38.587 10:50:51 -- pm/common@50 -- $ kill -TERM 3647516 00:40:38.587 10:50:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:38.587 10:50:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:40:38.587 10:50:51 -- pm/common@44 -- $ pid=3647538 00:40:38.587 10:50:51 -- pm/common@50 -- $ sudo -E kill -TERM 3647538 00:40:38.587 + [[ -n 3119300 ]] 00:40:38.587 + sudo kill 3119300 00:40:38.598 [Pipeline] } 00:40:38.619 [Pipeline] // stage 00:40:38.624 [Pipeline] } 00:40:38.643 [Pipeline] // timeout 00:40:38.649 [Pipeline] } 00:40:38.668 [Pipeline] // catchError 00:40:38.674 [Pipeline] } 00:40:38.693 [Pipeline] // wrap 00:40:38.700 [Pipeline] } 00:40:38.717 [Pipeline] // catchError 00:40:38.727 [Pipeline] stage 00:40:38.730 [Pipeline] { (Epilogue) 00:40:38.746 [Pipeline] catchError 00:40:38.748 [Pipeline] { 00:40:38.764 [Pipeline] echo 00:40:38.766 Cleanup processes 00:40:38.772 [Pipeline] sh 00:40:39.056 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:39.056 3647618 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:40:39.056 3647959 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:39.071 [Pipeline] sh 00:40:39.355 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:39.355 ++ grep -v 'sudo pgrep' 00:40:39.355 ++ awk '{print $1}' 
00:40:39.355 + sudo kill -9 3647618 00:40:39.367 [Pipeline] sh 00:40:39.649 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:39.649 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:40:47.789 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:40:54.367 [Pipeline] sh 00:40:54.652 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:54.652 Artifacts sizes are good 00:40:54.666 [Pipeline] archiveArtifacts 00:40:54.673 Archiving artifacts 00:40:54.836 [Pipeline] sh 00:40:55.121 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest 00:40:55.135 [Pipeline] cleanWs 00:40:55.144 [WS-CLEANUP] Deleting project workspace... 00:40:55.145 [WS-CLEANUP] Deferred wipeout is used... 00:40:55.151 [WS-CLEANUP] done 00:40:55.153 [Pipeline] } 00:40:55.176 [Pipeline] // catchError 00:40:55.189 [Pipeline] sh 00:40:55.471 + logger -p user.info -t JENKINS-CI 00:40:55.481 [Pipeline] } 00:40:55.499 [Pipeline] // stage 00:40:55.505 [Pipeline] } 00:40:55.523 [Pipeline] // node 00:40:55.529 [Pipeline] End of Pipeline 00:40:55.569 Finished: SUCCESS